# Better A/B Testing with Survival Analysis

I've already made the case in several blog posts (here, [here](https://www.linkedin.com/pulse/better-churn-prediction-part-3-iyar-lin-ov5af/), and here) that using survival analysis can improve churn prediction.
In this blog post I'll show another use case where survival analysis can improve on common practices: A/B testing!
## The problems with common A/B testing practices
Usually, when running an A/B test, analysts assign users randomly to variants over time and measure each variant's conversion rate as the ratio of conversions to users. A user who just entered the test and one who has been in it for two weeks get the same weight.
This can be adequate when a conversion either happens or doesn't within a short time frame after assignment to a variant (e.g. finishing an onboarding flow).
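To make the common practice concrete, here's a minimal sketch in Python. The counts are made up, and the use of a two-proportion z-test from statsmodels is an illustrative assumption, not a claim about any particular team's tooling:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical cumulative counts at some snapshot of the test
conversions = np.array([130, 158])  # variant A, variant B
users = np.array([1000, 1000])      # users assigned so far

# Naive metric: conversions / users, ignoring how long each user
# has been in the test
rates = conversions / users
stat, p_value = proportions_ztest(conversions, users)
print(f"A: {rates[0]:.3f}, B: {rates[1]:.3f}, p-value: {p_value:.3f}")
```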
There are, however, many instances where conversions are spread over a longer time frame. One example is a first order after visiting a site's landing page. Such conversions may happen within minutes, but a large portion can still happen days after the first visit.
In such cases the business KPIs are usually "bounded" to a certain period – e.g. "conversion within 7 days" or "churn within 1 month".
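Computing such a bounded KPI directly means a user only counts once they've been observed for the full window. A small sketch of that logic (the data frame and column names are hypothetical):

```python
import pandas as pd

# Hypothetical assignment log; NaN in days_to_convert = not converted yet
df = pd.DataFrame({
    "variant":         ["A", "B", "A", "B", "A"],
    "days_in_test":    [10.0, 3.0, 8.0, 12.0, 20.0],
    "days_to_convert": [2.0, None, None, 9.0, 5.0],
})

# Only users observed for at least 7 days have a defined 7-day outcome
eligible = df[df["days_in_test"] >= 7].copy()
eligible["converted_7d"] = eligible["days_to_convert"] <= 7  # NaN compares False
print(eligible.groupby("variant")["converted_7d"].mean())
```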
In those instances, measuring conversions without considering their timing has two major flaws:
- It makes the statistic we're measuring unintelligible: the average conversion rate at any point in time does not translate to any bounded metric. In fact, as the test keeps running, conversion rates will increase simply because users get more time to convert. The experiment results are thus hard to relate to the business KPIs.
- It discards the timing information, which can reduce statistical power compared with methods that do take conversion timing into account (see the sketch after this list).
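Survival analysis handles both flaws at once: a Kaplan-Meier estimator turns every user's (possibly censored) follow-up time into a bounded estimate such as "conversion within 7 days", and a log-rank test compares variants using the full timing information. Here's a minimal sketch using Python's lifelines package; the Weibull parameters, sample sizes, and entry pattern are illustrative assumptions, not the original post's setup:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)

def simulate(scale, n=1000):
    # Users join uniformly over a 30-day window, so follow-up times vary
    follow_up = rng.uniform(0, 30, n)
    t_convert = scale * rng.weibull(0.5, n)      # illustrative Weibull times
    observed = np.minimum(t_convert, follow_up)  # what we actually see
    converted = t_convert <= follow_up           # False = censored, not "failed"
    return observed, converted

t_a, e_a = simulate(scale=15)  # variant A
t_b, e_b = simulate(scale=12)  # variant B converts somewhat faster

# Kaplan-Meier gives a bounded, business-friendly estimate
kmf = KaplanMeierFitter().fit(t_a, event_observed=e_a)
print("A: P(convert within 7 days) =", round(1 - kmf.predict(7), 3))

# Log-rank test compares the full conversion-time curves of both variants
result = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print("log-rank p-value:", round(result.p_value, 4))
```

Note that users who haven't converted yet still contribute their follow-up time; the censoring indicator is what keeps the estimate honest.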
To demonstrate point #2, we'll run a small simulation study.
We'll have users join the experiment randomly over a 30-day period. Users' time to convert will be simulated from a Weibull distribution with scale