Heh, given the title I initially thought SentinelOne was addressing the Chris Krebs situation, and the adversary would be the current administration.
But it's about different nation state actors.
Wow, so if you don't fall in line with the demagoguery, you'll be thrown out, probably to be replaced with someone who does, or it'll be rinse and repeat until that happens.
In Article III, Section 3 of the United States Constitution, treason is specifically limited to levying war against the U.S., or adhering to their enemies, giving them aid and comfort.
Under U.S. Code Title 18, the penalty is death, or not less than five years' imprisonment (with a minimum fine of $10,000, if not sentenced to death). Any person convicted of treason against the United States also forfeits the right to hold public office in the United States.
The Constitution sets a really high bar for treason. “It was not enough, Chief Justice John Marshall’s opinion emphasized, merely to conspire ‘to subvert by force the government of our country’ by recruiting troops, procuring maps, and drawing up plans. Conspiring to levy war was distinct from actually levying war.” https://constitutioncenter.org/the-constitution/articles/art...
“No person shall be convicted of Treason unless on the Testimony of two Witnesses to the same overt Act, or on Confession in open Court.”
Cramer v. United States is an interesting example. ‘As the Court explained: “A citizen intellectually or emotionally may favor the enemy and harbor sympathies or convictions disloyal to this country’s policy or interest, but, so long as he commits no act of aid and comfort to the enemy, there is no treason. On the other hand, a citizen may take actions which do aid and comfort the enemy—making a speech critical of the government or opposing its measures, profiteering, striking in defense plants or essential work, and the hundred other things which impair our cohesion and diminish our strength—but if there is no adherence to the enemy in this, if there is no intent to betray, there is no treason.” In other words, the Constitution requires both concrete action and an intent to betray the nation before a citizen can be convicted of treason; expressing traitorous thoughts or intentions alone does not suffice.’
It was an interesting read whilst having a cup of coffee, but rather shallow. A couple of mentions of some tools: goreshell, shadowpad, scatterbrain. It might be targeting C-suite folks more than analysts or other security folks. It reads more like "you should be slightly afraid to handle this on your own, so better hire SentinelOne to help you."
Now that you mention it, the article does read like curated content. I suppose a piece does not have to be directly selling anything to be an advertisement. Fluff can do just as good a job by simply making readers feel good about a brand.
The essence of the article is a topic of concern, but it is expressed rather lightly in TFA. End runs around security happen at the edges: from the bottom, by undermining hardware, code libraries, or supply chains; and now we're seeing "decapitation attacks" right at the top. Our "western" security models have a weakness: with their roots in Prussian military organisation and bureaucratic technical management, by default they trust up. The whole DOGE caper (what I would call a Dr Strangelove scenario, a variation of insider threat) exposes this as actually very vulnerable.
Cybersecurity services that operate as MSPs (the acronym variation where S is for security) hit a fundamental problem. A managed security provider becomes a bigger and juicier target, since all of its clients are implied spoils. If they in turn defer to or buy from bigger actors up the food chain, those become juicier targets too.
This is a frequent chestnut when we interview cybersecurity company CEOs. Although it resurfaces the old "Who guards the guardians?", there is more to it. One has to actively avoid concentrating too much "power" (non-ironically a synonym of vulnerability ... heavy lies the crown) in one place, and instead distribute risk by distributing responsibility for building trust relations (TFA mentions this). I expect we'll see more and more of this sort of thinking as events unfold.
It's the 2025 RSA Conference USA in San Francisco, so lots of papers are going to be presented and talks given on clever new ways researchers have figured out to beat different layers of security, track APTs, etc.
I hope you're entirely kidding with that statement.
RSA was famously bribed by the NSA to make their compromised PRNG the default in their cryptography library, which shipped from 2004 to 2013. Any credibility they might've had vanished after that was publicized in the Snowden leaks.
I tuned in late to this show. Are they down to the DPRK because they already successfully rooted out the Mossad, CIA and NSA insiders in previous episodes?
The biggest thing you can do is just ensure you conduct at least one on-site interview, and make sure that interviewer is in a position to realize if the person they met is not the same one who shows up for other interviews and/or the work. The cost of a flight is nothing compared to the cost of recruiting and hiring (and if you really are fully remote and geographically distributed, you probably already have somebody in their metro area); on-sites used to be standard.
I mean, it's not the biggest thing you can do; you could start selling to the government, become a cleared contractor, and then you could require a USG security clearance for job applicants.
I would call the on-site interview and/or minimal background check "the most pareto frontier thing you can do."
How much of that would you get from just using e-verify? That doesn't find criminal issues like a security clearance does but seems like it would at least reduce the pool of nefarious applicants by a significant margin.
The solution is just-in-time access controls and context-aware authorization for things like database access (i.e., given a justification and an approval workflow, the employee can access user X's data for 2 hours). These are the guard rails against a rogue employee: they work by introducing friction.
I rolled out this level of control at a big company and got pushback from the sales team -- they needed access to generate leads, do demos on the spot, etc. It was a hard fight and I lost.
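To make the idea concrete, here is a minimal sketch of a just-in-time access broker of the kind described above. Everything here (the `JITAccessBroker` class, its method names, the resource strings) is hypothetical illustration, not any particular product's API: every grant requires a justification and an approver, and carries a hard expiry so access evaporates on its own.

```python
import time
from dataclasses import dataclass


@dataclass
class Grant:
    user: str
    resource: str
    justification: str
    approver: str
    expires_at: float  # unix timestamp after which the grant is dead


class JITAccessBroker:
    """Toy just-in-time access broker: no standing access, only
    justified, approved, time-boxed grants."""

    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def request(self, user: str, resource: str, justification: str,
                approver: str, ttl_seconds: int = 7200) -> Grant:
        # In a real system the approval step would be an async workflow
        # (ticket, Slack approval, etc.); here we just record who approved.
        if not justification.strip():
            raise ValueError("justification required")
        grant = Grant(user, resource, justification, approver,
                      expires_at=time.time() + ttl_seconds)
        self._grants.append(grant)
        return grant

    def is_allowed(self, user: str, resource: str) -> bool:
        # Access exists only while a live grant covers this exact
        # (user, resource) pair; expiry is enforced on every check.
        now = time.time()
        return any(g.user == user and g.resource == resource
                   and g.expires_at > now for g in self._grants)


broker = JITAccessBroker()
broker.request("alice", "db:user:X", "debug ticket #123", approver="bob")
print(broker.is_allowed("alice", "db:user:X"))    # True while the grant lives
print(broker.is_allowed("mallory", "db:user:X"))  # False: no grant at all
```

The friction is the point: the justification and approver are recorded per grant, so a rogue employee leaves an audit trail for every access window they open.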
Just make them show up in person at least once for onboarding. They're not going to fly out from China or Russia (where they tend to be based) to do this; especially not to the US.
Verify their ID in person, issue their laptop etc in person, make sure someone who interviewed them is there to meet and greet them (and attest that it's the same person they talked to.)
If you can at least do a final interview in person also, then that's even better.
I run an outsourcing agency; we work with US clients and have seen lots of fake applications (of differing degrees of sophistication). So far we have either rejected them right away or filtered them out during (remote) interviews.
Definitely the 'regular' application procedures - check someone's ID, check their references, ideally meet them face to face, etc.
This is more tricky with remote-only jobs or worse, "gigs" where you don't even meet people. But also, I would've expected open source to be "infiltrated" a lot more than it has, since that's very much anonymous internet culture... but also a culture of code reviews and the like.
The latest advice for spotting at least North Koreans who apply under fake identities is to ask them to comment on how fat Kim Jong Un is. Real North Koreans could not comment on that.
Yes, there are a lot of identifiers. They are improving a lot, so things are changing daily. There are certain steps to take pre-hiring and post-hiring. If you need help, share your email and I can provide details.
Start with a fingerprint check before you even talk to them.[1] Then ask for a REAL ID at the interview, take fingerprints again, and match with the ones from the pre-screen fingerprint check. You need to be signed up with a driver's license verification service to validate the ID.[2]
It takes that level of verification to become a security guard or a school bus driver. Anybody in computer security should be doing this.
I live in China, a supposedly autocratic country and one with universal ID, and even companies here don't take fingerprints. ID is shown when you are officially onboarded. I can't speak for all companies, but for most (at least the ones without a need for a security clearance), requiring ID at the interview would be seen as a red flag, and requiring fingerprints would probably be put on social media and name-shamed, if not straight up reported to the authorities.
I have some experience working for financial institutions with access to highly confidential information, and haven't been required to produce my fingerprint for, like, ever.
Again, I can't say for all, and I'm sure there are certain companies and positions which require such measures, but I could not imagine requiring fingerprints (or even ID during interview) to be acceptable in most cases.
You didn't have to do an in-person background check that included fingerprinting? When I worked at a bank this was required. It was run by a third party company not at the office.
You probably worked in divisions where the auditors didn’t issue a finding yet, or outside the regulatory scope.
It’s pretty common in finance, government and human services. Amazon is very aggressive with this - contractors in their facilities get regular background checks.
Usually the employee goes to a third party run by a company like Idemia to collect the biometric. I can’t imagine not collecting the ID information of prospective employees - that’s just asking for fraud.
In a high security environment, you can get a report from law enforcement; in the Netherlands this is called a Verklaring Omtrent het Gedrag (roughly, a "declaration of conduct"), which is basically a signed/authenticated document saying "this person was not involved in financial crimes". You need to have it specified for a category of crimes; the financial-crimes one, for example, is the one I had to get to work at a bank as a contractor.
You just can't secure something like Windows, Linux, or macOS, because they're faulty by design. Any business that claims to be able to do so is selling snake oil.
Capability based operating systems can be made secure. Data diodes are a proven strategy to allow remote monitoring without the possibility of ingress of control. Between those two tools, you have a chance of useable and secure computing in the modern age, even against advanced threats.
Yeah... I feel like Cassandra, but here we are. You've been warned, yet again.
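For readers unfamiliar with the capability model mentioned above, here is a toy sketch of the core idea (the `FileCap` class and its methods are my own hypothetical illustration, not any real OS API): a capability is an unforgeable reference that bundles an object with the right to use it, and holders can only hand out equal or weaker capabilities ("attenuation"), never stronger ones.

```python
class FileCap:
    """Toy capability: possession of this object IS the authorization.
    There is no ambient global namespace (like a filesystem path) to
    escalate through; code can only touch what it was handed."""

    def __init__(self, data: dict, *, can_write: bool):
        self._data = data
        self._can_write = can_write

    def read(self, key):
        return self._data[key]

    def write(self, key, value):
        if not self._can_write:
            raise PermissionError("read-only capability")
        self._data[key] = value

    def attenuate(self) -> "FileCap":
        # Derive a strictly weaker capability to hand to less-trusted
        # code; the weaker one can never be upgraded back.
        return FileCap(self._data, can_write=False)


store = {"secret": 42}
rw = FileCap(store, can_write=True)   # trusted code holds read-write
ro = rw.attenuate()                   # untrusted code gets read-only
print(ro.read("secret"))              # 42: reading is permitted
try:
    ro.write("secret", 0)             # writing is not
except PermissionError as e:
    print(e)
```

In a real capability OS (KeyKOS, seL4 and kin) the unforgeability is enforced by the kernel rather than by language discipline, but the shape of the reasoning is the same: authority flows only along explicitly granted references.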
I agree about data diodes, but how do you handle data egress? One solution is to have strict data checks on egress, but leaks are still possible.
Data diodes also still suffer from the possibility of injected malware that can mount DoS attacks.
I agree about capability-based security, but strictly speaking, the capabilities in current OSes are just primitive, i.e. checking file permissions. What capability checks do you mean?
My understanding is that the biggest threat is not capability checking but capability escalation, i.e. bypassing checks, and hardware hacking, e.g. Spectre/Meltdown-type attacks that can read arbitrary memory.
There is a step up from diodes called [inspecting] data guards and an adjacent technology called content disarm and reconstruct (CDR) that doesn't rely on signatures or heuristics - it just assumes every document is malicious.
Combining these 3 technologies with certain policies (e.g. a two-man rule, with the hardware/software itself developed on an airgap), you can make a system practically impossible to attack, even for nation-state adversaries.
Edit to point out that these all work in 2-way configurations as well.
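The CDR idea mentioned above can be sketched in a few lines. This is a deliberately simplified illustration (the function name, field list, and document shape are all my own assumptions, not any real CDR product): instead of scanning the incoming document for known-bad content, you extract only whitelisted plain-text fields and build a brand-new document from them, so anything you didn't explicitly ask for (macros, embedded objects, scripts) is dropped by construction.

```python
# Whitelist of fields that survive reconstruction; order is fixed so
# the rebuilt document is deterministic.
ALLOWED_FIELDS = ("title", "author", "body")


def disarm_and_reconstruct(doc: dict) -> dict:
    """Toy content-disarm-and-reconstruct: never pass the original
    container through. Copy out only whitelisted text fields and
    return a freshly built document containing nothing else."""
    clean = {}
    for name in ALLOWED_FIELDS:
        value = doc.get(name)
        if isinstance(value, str):
            clean[name] = value  # copy the text, not the container
    return clean


hostile = {
    "title": "Q3 report",
    "body": "All numbers look fine.",
    "macro": "AutoOpen(): DownloadAndRun(...)",  # would-be payload
}
print(disarm_and_reconstruct(hostile))
# {'title': 'Q3 report', 'body': 'All numbers look fine.'}
```

Real CDR engines do this at the file-format level (re-rendering PDFs, rebuilding Office XML, re-encoding images), but the principle is the same: reconstruct from parsed-out safe content rather than sanitize the original in place.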
What OSes are you proposing though? You're positing a problem and warning people, but what are the alternative operating systems that implement these data diodes?
Google’s new operating system Fuchsia, which is still in development (contrary to what people on here will tell you), actually has what seems to be a genuinely defensible architecture.
Hmm, but this is not really about that; it's more about how companies can be protected. It talks, e.g., about shadow IT workers trying to infiltrate the company.
This is one of those situations, like with cryptocurrencies or social media, where the old thing had certain problems for pretty fundamental reasons, and the new thing claims it won't have the same problems, but that's just because the new thing is new and hasn't gotten to the point of the problems being discovered yet.
If an operating system can run any program you want, then it can run malware if you want. Windows, Linux and Mac OS are OSes that let you run any program you want. Android and iOS are OSes that restrict which programs you can run. Different techniques end up placing the boundary in different places but they still either limit you from running lots of nonmalware programs or they allow you to run lots of malware.
Operating systems already completely sandbox processes. Then they poke a ton of holes in the airtight hatchway because holes are useful. Suddenly it's not airtight, but at least it's useful. Then someone makes a new OS with a holeless airtight hatchway. In time, it too will discover which holes it needs, and won't be airtight.
Something similar happens with data diodes. A reply mentions punching holes in a data diode by allowing certain limited two-way communication. Fine, but then it's not a data diode. And someone will suggest putting a data diode on one side of your not-data-diode to make it airtight again. And you'll have the problems of a data diode again.
> Recent adversaries have included:
> - DPRK IT workers posing as job applicants
> - ransomware operators probing for ways to access/abuse our platform
> - Chinese state-sponsored actors targeting organizations aligned with our business and customer base
(context: https://www.cnbc.com/2025/04/16/former-cisa-chief-krebs-leav... )