I worked for a sister company. These features were A/B tested, their effect on bookings was measured, and the numbers fed into the POs' bonuses. They were thus incentivized to try these kinds of annoying tricks, because ultimately they work, or at least they appear to when their effect is measured only in bookings.
I asked several times whether we measured customer retention in any way, since we kept adding things I thought would hurt in the long term (a customer would go through the whole booking process, then never return because the experience was so bad). We didn't. I guess it's difficult; I have no idea how that kind of metric would be computed.
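In hindsight, even a crude repeat-booking rate per cohort might have been a start. Something like this sketch, purely illustrative with made-up data shapes, not anything we actually had:

    from collections import defaultdict
    from datetime import timedelta

    def repeat_rate(bookings, window_days=365):
        # bookings: iterable of (customer_id, booking_date) pairs.
        # Returns the share of customers whose first booking was followed
        # by at least one more booking within window_days.
        first = {}
        later = defaultdict(list)
        for cid, date in sorted(bookings, key=lambda b: b[1]):
            if cid not in first:
                first[cid] = date
            else:
                later[cid].append(date)
        window = timedelta(days=window_days)
        returned = sum(
            1 for cid, d0 in first.items()
            if any(d - d0 <= window for d in later[cid])
        )
        return returned / len(first) if first else 0.0

Run that over the cohort exposed to a given experiment versus everyone else and you'd at least see whether "more bookings now" comes at the cost of "fewer customers later".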
The checkbox is a nice idea; it could help retain customers on the verge of leaving. It wouldn't work for customers who visit once and get so annoyed they never come back.
Also: a good portion of the traffic was affiliate traffic, e.g. from Google Maps. I guess people booking from a Google Maps listing care less, since they land directly on the hotel page? They won't be browsing for hotels through the site UI, so they see fewer annoying pop-ups and messages.
So one thing we did was have (A/B-style) experiments keep collecting data even after a decision was taken to enable or disable them[1]. No new sessions/devices would be added to the experiment, but those already exposed would still be reflected. So if everyone who decided to book because of an experiment ended up cancelling, or consistently gave more negative feedback over the months that followed, our automation could still flag that.
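In spirit the post-decision check looked something like this minimal sketch (made-up names, not our actual tooling): the variant assignment of already-exposed sessions stays frozen, and trailing outcomes such as cancellations keep being compared per variant.

    from collections import Counter

    def trailing_health_check(assignments, cancellations, min_gap=0.05):
        # assignments: dict of session_id -> "control" | "treatment",
        #   frozen at the time the rollout decision was made.
        # cancellations: set of session_ids whose booking was later cancelled.
        # Flags the experiment if the treatment's trailing cancellation
        # rate drifts well above the control's.
        totals, cancelled = Counter(), Counter()
        for sid, variant in assignments.items():
            totals[variant] += 1
            cancelled[variant] += sid in cancellations
        rates = {v: cancelled[v] / totals[v] for v in totals}
        if rates.get("treatment", 0.0) - rates.get("control", 0.0) > min_gap:
            return "flag: elevated trailing cancellations in treatment"
        return "ok"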
Whatever we may say about the company and its practices, the engineering team working on the experimentation tooling had their hearts in the right place. They worked extremely hard, and with tremendous care, to make sure the data presented would lead to the best possible decisions.
[1] Caveat: if an experiment was fully disabled (it doesn't work or is buggy) or fully enabled, the measurement is obviously no longer clean, because folks in one variant will now see the other. But for trailing metrics, i.e. anything affected by past booking decisions, this kind of analysis is imperfect yet still carries some meaning as a health check.