Last time I discussed why we believe clinical trials and experimental evidence are so much better than anecdotal evidence. One reason for truly appreciating scientifically derived data that I did not include is a little thing called peer review.
Wikipedia describes peer review as the evaluation of work by one or more people of similar competence to the producers of the work. It is a form of self-regulation by the members of a professional or scientific community.
In order for peer review to occur, research has to have a certain level of transparency. Experimental procedures and results must be put out there for scrutiny. And that is where this page comes in.
Seekingalpha.com publishes stock market insights and financial analysis. They had a source at the 2018 Macular Society meeting where Apellis presented their findings on APL-2. Gleaning information from the PowerPoint slides, an author for seekingalpha.com developed the opinion that APL-2 may not be all that Apellis has cracked it up to be. [Lin/Linda: the full research results have not yet been published. The slides from this meeting are the best source we have for the preliminary results.]
Why? Well, for one thing, he said the rate of conversion from dry to wet AMD was 21%. Ouch. That is just about 1 in every 5 subjects. The average rate at which dry AMD patients convert to wet is between 10 and 15%; rounding it off, that is about one patient in 8. (In the trial itself, only 1 person, or 1%, of those in the sham group converted to wet; that’s slide 14.) There may be something about APL-2 causing an increased conversion rate. [If you look at slide 16, they have some theories on why this happened.]
Another issue the author of the article raised had to do with what the primary outcome measure should be. Apellis measured the rate of growth of the lesion. That slowed down; no one was debating that. What was being questioned was why visual acuity was not used as the primary outcome measure. After all, if you can see, who cares how big the hole in your retina actually is?
Speaking as me, Sue, I would have assumed the size of the lesion is related to how well you see. Slower growth would mean better retention of vision, right? Not necessarily. Apellis reported a modest, positive difference between monthly treatment and sham at 12 months into the project, but this positive difference did not persist. At 12 months the monthly treatment group had lost 3.3 letters on the chart and the sham group had lost 4.4, yet by month 18 the monthly treatment group had lost 7.7 letters and the sham group had lost 6.4. What the hell????? The data suggested a greater loss of acuity with the treatment than without it.
Bottom line for the seekingalpha.com writer was this: don’t sink a lot of money into Apellis stock. What is the bottom line for us? How about a few questions answered? Or at least a few clever theories? I would like to know why a saved macula does not equal saved acuity. That seems counterintuitive.
Another question: how did they get through phase 1, which tests safety and tolerability, if subjects were coming out in worse shape (if they were; I don’t know)? Are the acuity data for phase 2 all that different?
And the $64,000 question: should anyone volunteer for their phase 3 trials? That one you answer for yourself.
Lin/Linda here: I was curious about the exact difference between a placebo group and a sham group. Here it is: “Placebo and sham treatment are methods used in medical trials to help researchers determine the effectiveness of a drug or treatment. Placebos are inactive substances used to compare results with active substances. And in sham treatments, the doctor goes through the motions without actually performing the treatment.”
Written March 3rd, 2018