As a fellow Brit I’ve always been puzzled by this too.
The DVLA website and online processes are very good, which is rare to say about a government IT system. I don't think I've had a single problem in a decade (or two?) of dealing with everything driving-related fully online.
I expect that since we have just one system, and a smaller number of workflows/situations to cover, it's easier to build a central online service. In the US, with 50 different state variations, it's perhaps a harder problem to solve.
FWIW I’m tentatively bullish on AI (for specific use cases) but I have never once… ever… ever… had a useful interaction with a customer service chatbot.
Regardless of the small print attempting to absolve the provider of any responsibility for anything (as such companies like to do), it does still sound like they have behaved unreasonably and made a bad situation worse by not being more collaborative with the customer.
Having seen unnecessarily unhelpful behaviour like this before, I find it infuriating and it deserves to be called out.
I will say that the OP seems to have a possibly unrealistic expectation of who is responsible for security. It is very rarely as binary as it is being described here. I could be wrong, not knowing all the details…
Regardless, it still sounds like Hostinger have done very little to help.
Well, we could all wish that being (or at least aiming to be) helpful were a standard throughout the industry. Sadly, in my experience it isn't, and even your notion of unhelpful is indicative of behaviour that has become par for the course these days with the services on offer - especially from larger companies, which have adopted the cheapest means of dealing with external complaints.
As you point out, we are in the dark as to just what the hosting services were; there's mention of a server, but we can only speculate. The company does point out in their TOS that the customer is responsible for backing up their own data - but, that said, we (as non-customers) also don't know how easy they make that for the customer.
Getting to the nature of the complaint: obviously any company such as Hostinger has to position itself to minimise legal exposure for what it is ultimately hosting, and would do so by setting up processes to monitor for threats - detection measures such as looking for file fingerprints, possibly using A.I. in some clever way (a rough sketch of what fingerprint scanning might look like follows the footnote below). They'd also have a process to handle external complaints from people or companies on the web.
I can sort of guess what might have started the ball rolling[1] (complaints by various domain providers), but that in itself is not proof - only an indication that someone did not address those issues, whatever they were, in a timely manner. The person within Hostinger tasked with handling the external complaint, faced with a domain suspension that was apparently never reversed and a customer apparently ignoring the fact that the domain was suspended, probably isn't going to go to great depths to call in the system admin to confirm. Now, they might be wrong in the small number of edge cases where it's all just a mix-up or some other honest problem ... but how many domains, and with how many different providers?
[1] >They denied us access to our own data, even for non-suspended domains
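To be clear about what I mean by file-fingerprint detection: this is only a sketch of the general technique, not anything I know Hostinger actually runs. The idea is simply to hash every file under a site's web root and compare the digests against a blocklist of known-bad fingerprints; the paths and placeholder digest below are made up.

    # Sketch of fingerprint-based scanning, as a host *might* do it.
    import hashlib
    from pathlib import Path

    KNOWN_BAD_SHA256 = {
        "0" * 64,  # placeholder digest; a real list would come from a threat-intel feed
    }

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(web_root: str) -> list[Path]:
        """Return files whose fingerprint matches the blocklist."""
        flagged = []
        for p in Path(web_root).rglob("*"):
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256:
                flagged.append(p)
        return flagged

    if __name__ == "__main__":
        for hit in scan("/var/www/example-site"):
            print("suspicious file:", hit)

A real pipeline would obviously layer more on top (fuzzy hashes, heuristics, complaint feeds), but even something this crude is enough to generate the automated suspensions people complain about.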
I was bored, so I decided to dig a little as it interested me. I found [1] to be a helpful explanation of the services Hostinger offers. I gather from [2] that the OP's business ought to have had access to on-demand backups - and I would expect a large, streamlined hosting service to be able to export a given number of backups to any practical external storage via a variety of protocols - but again, as a non-customer I have no idea whether what I would expect is the actual situation.
However, running 70 sites without any form of external just-in-case backup, or without monitoring them daily/closely, was an accident waiting to happen. Accidents happen, and the prominent OVHcloud incident in '21 should still linger in the minds of those tasked with securing their web-based company's future. Other technical fubars can also happen, like an SSD RAID dying catastrophically.
I was amused when my own data on a server was migrated to an SSD RAID 5 ... it failed weeks later. The expectation was that it would be at most a partial loss, but however hard the system admin tried over a couple of days, little could be recovered - just a couple of gigs of old images. Thankfully all but the newest files were backed up, so it was easiest to just let it go and start from scratch. SSDs are IMO very unforgiving, but in time they will get better at failure detection.
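For what it's worth, the kind of just-in-case external backup I mean doesn't have to be elaborate. Here's a minimal sketch under my own assumptions - the paths, remote host and one-directory-per-site layout are illustrative only, and rsync over SSH is just one of several workable transports.

    # Minimal off-site backup sketch: tar each site's directory and copy it
    # to a separate machine. Paths, host and layout are illustrative only.
    import subprocess
    import tarfile
    from datetime import date
    from pathlib import Path

    SITES_ROOT = Path("/var/www")                    # assumed: one directory per site
    REMOTE = "backup@offsite.example.net:/backups/"  # any external target would do
    STAGING = Path("/tmp/site-backups")

    def backup_all():
        STAGING.mkdir(parents=True, exist_ok=True)
        stamp = date.today().isoformat()
        for site_dir in sorted(p for p in SITES_ROOT.iterdir() if p.is_dir()):
            archive = STAGING / f"{site_dir.name}-{stamp}.tar.gz"
            with tarfile.open(archive, "w:gz") as tar:
                tar.add(site_dir, arcname=site_dir.name)
            # push the archive off the box; scp, rsync or object storage all work
            subprocess.run(["rsync", "-a", str(archive), REMOTE], check=True)

    if __name__ == "__main__":
        backup_all()  # run daily from cron so a host-side failure isn't fatal

Run from cron on the host (or pulled from the backup box), something like this would have left the OP with 70 recoverable sites regardless of what Hostinger did.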
dang seems to be saying that he did add the “d” though?
FWIW I would have preferred it to be just left as “uses” per the article title.