You can basically set up your own instructions and set up your own observation solutions... you can imagine everything from security to farm operations; the sky's the limit.
That is a demo of course, but I think what sets LLM tools like this apart from what came before is that the user gets to decide what the tool is and can change the meaning, in other words what it should be looking for, at any time.
That is, of course, only if the solution is implemented correctly.
There is immense potential for these types of capabilities if they are built in a way that leaves the specific use case up to users.
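For what it's worth, here is a minimal sketch of the "user decides what it should be looking for" idea. The `describeWithLlm` call is a hypothetical placeholder, not the actual stack; the point is only that the instruction is plain data the user can change at any time.

```java
// Minimal sketch, assuming a hypothetical multimodal model call: the observation
// instruction is just user-editable data, and each snapshot is evaluated against
// whatever the current instruction says.
class ObservationConfig {
    private volatile String instruction = "Alert me if the gate is left open."; // illustrative default

    void updateInstruction(String newInstruction) {
        this.instruction = newInstruction; // security today, farm operations tomorrow
    }

    String evaluate(byte[] snapshot) {
        return describeWithLlm(instruction, snapshot);
    }

    // Hypothetical placeholder for whatever model API is actually used.
    private static String describeWithLlm(String instruction, byte[] snapshot) {
        throw new UnsupportedOperationException("wire up a multimodal model call here");
    }
}
```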
Talking about it or explaining it is like pulling teeth; there is generally just a thorough misunderstanding of the notion... even though cryptographic certificates are what make the modern internet possible.
Any number of entities can be certificate issuers, as long as they can be deemed sufficiently trustworthy. Schools, places of worship, police, notaries, employers... they can all play the role of trust anchor.
The app allows for self-revocation using the private key or a revocation code given when the cert is issued, which is useful if a certificate is compromised. There is also an admin interface a trust anchor can use to revoke certificates they issued, and a rogue trust anchor chain can also be revoked.
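As a rough illustration, here is a minimal sketch of the two self-revocation paths, with hypothetical names and storage choices (the signature algorithm and hashed revocation code are assumptions, not the app's actual design):

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of two ways a holder could prove they are entitled to revoke their own cert.
class SelfRevocation {
    // Path 1: the holder signs a revocation request with the cert's private key;
    // the server verifies it against the cert's public key.
    static boolean signatureIsValid(PublicKey certKey, byte[] revocationRequest, byte[] signature)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withECDSA");
        sig.initVerify(certKey);
        sig.update(revocationRequest);
        return sig.verify(signature);
    }

    // Path 2: the holder presents the revocation code from issuance;
    // the server compares its hash to the stored hash in constant time.
    static boolean codeIsValid(byte[] storedCodeHash, String presentedCode) throws GeneralSecurityException {
        byte[] presentedHash = MessageDigest.getInstance("SHA-256")
                .digest(presentedCode.getBytes(StandardCharsets.UTF_8));
        return MessageDigest.isEqual(storedCodeHash, presentedHash);
    }
}
```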
Each trust anchor is issued a single certificate that can carry delegation ability, i.e. the ability to issue new trust anchor certs to others.
So if, say, a UPS store is issued a cert and they go rogue, we can just revoke the trust anchor cert that was issued to the store, and all certs issued further down the chain are automatically revoked as well. The revocation check happens either in the app or, when a third party performs the verification, on their side: they will see that a cert on the issuing chain has been revoked and reject the cert.
This is similar to how TLS certs are handled: if a CA goes rogue and its root cert is revoked or distrusted, every cert issued under that CA becomes invalid.
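Conceptually, the chain check is just a walk up the issuing chain against a revocation list. A minimal sketch, with hypothetical types rather than the app's real data model:

```java
import java.util.Set;

// Sketch of the chain-walk revocation check: a cert is rejected if it, or any
// cert above it in its issuing chain, appears in the revocation list.
record Cert(String serial, Cert issuer) {}

class ChainVerifier {
    private final Set<String> revokedSerials; // e.g. loaded from the published revocation list

    ChainVerifier(Set<String> revokedSerials) {
        this.revokedSerials = revokedSerials;
    }

    boolean isAcceptable(Cert cert) {
        for (Cert c = cert; c != null; c = c.issuer()) {
            if (revokedSerials.contains(c.serial())) {
                return false; // revoking a trust anchor invalidates everything issued under it
            }
        }
        return true;
    }
}
```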
As for refund issues, that's a problem for the cert issuer to deal with.
> As for refund issues, that's a problem for the cert issuer to deal with.
No, it's your problem: it's your brand slapped over everything, and now you've got tens of thousands of innocent people angry that you've revoked the IDs they paid for in good faith.
When you say that “we” can revoke, I assume you are talking about your company - the app. What sort of resources would be required to constantly audit the potentially thousands or hundreds of thousands of certificate issuers on your platform?
All certificates are cryptographically linked to an identity-anchor certificate, meaning buying a certificate would require the seller to reveal the private key tied to that identity-anchor certificate, a tall order I would argue.
In the case of stolen identity certificates, they can be revoked, which limits their illegitimate utility.
We can still have laws, e.g. that using someone else's certificate (or knowingly giving them your certificate) would constitute fraud.
We have laws against kids buying alcohol, even though kids can (and do) try to get adults to buy them booze, but I don't think that's a good reason to say we shouldn't have laws against kids drinking.
This is about relying on requirements-type documents to drive AI-based software development. I believe this will ultimately be integrated into all the AI dev tools, if it isn't already. It is really just additional context.
We are also using the requirements to build a checklist: the AI generates the checklist from the requirements document, and the checklist then serves as context that can be used for further instructions.
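A minimal sketch of that flow, assuming a hypothetical `callModel` wrapper and illustrative prompts (not the actual ones used):

```java
// Sketch of the requirements -> checklist -> context flow: the model turns the
// requirements document into a checklist once, and that checklist is then
// carried along as context for subsequent instructions.
class RequirementsChecklist {
    static String buildChecklist(String requirementsDoc) {
        String prompt = """
            Read the following requirements document and produce a numbered
            checklist of concrete, verifiable implementation tasks.

            %s
            """.formatted(requirementsDoc);
        return callModel(prompt);
    }

    static String instructWithChecklist(String checklist, String instruction) {
        // The generated checklist rides along as context for further instructions.
        return callModel("Checklist:\n" + checklist + "\n\nInstruction: " + instruction);
    }

    // Hypothetical stand-in for whichever LLM client is actually in use.
    static String callModel(String prompt) {
        throw new UnsupportedOperationException("wire up your LLM client here");
    }
}
```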
For folks who would prefer a more "full-bodied" experience, we offer a UI-configuration-based alternative that supports JavaScript and Groovy, including IDE integration.
Intriguing product; it definitely addresses a major pain point for inventors.
I do wonder, however: I see that a couple of the profiles listed show 500+ patents.
Does this indicate that we are now in an era of full-fledged "IP spam", or can you argue that inventors have in fact historically been under-rewarded because of the difficulty of filing patents? Otherwise that is a lot of patents for someone who isn't building a spacecraft :)
Haha, to be fair, the team I ran was an applied research group that would partner with teams within the organization. Every month we'd do designs and bring ML expertise to a new project, plus we maintained a ton of tools. This resulted in a lot of patents every month, particularly around generative AI, AutoML, federated learning, etc., years before they were popular. It was a fairly open space, which resulted in a high number of patents.
I am the developer of Solvent, a polyglot web app platform for the JVM that has long supported JRuby integration, as shown here:
https://codesolvent.com/web-apps/
I am not a Ruby developer, and even though I did the integration, I don't know anything about its internals. I am guessing that if JRuby goes away, GraalVM, which supports Ruby, will be its replacement?
GraalVM Ruby does not integrate with Java in the same way, and not nearly as seamlessly. JRuby allows implementing Java interfaces in a direct way that optimizes better, as well as extending and importing Java classes such that they look and feel like normal Ruby classes.
JRuby runs on all JVMs, with or without Graal, whereas the GraalVM languages are tied to that runtime. The design of those languages also incurs heavy startup, warmup, and memory footprint penalties, even greater than those of JRuby or the JDK itself, and those problems are not easily solvable.
JRuby will never go away, and as long as I have a say in it, development will continue full speed ahead. We are tackling some of our long-desired optimizations now, have near parity on Ruby language features with the unreleased Ruby 3.4, and we're very excited for the future of the project.
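To illustrate the kind of integration described above, here is a small sketch using the JRuby Embed API (org.jruby.embed.ScriptingContainer) to run a Ruby scriptlet from Java; the scriptlet imports a Java class and implements a Java interface directly in Ruby. The class and task names are just examples.

```java
import org.jruby.embed.ScriptingContainer;

// Sketch: embedding JRuby from Java and showing Ruby code that treats Java
// classes and interfaces like ordinary Ruby ones.
public class JRubyEmbedSketch {
    public static void main(String[] args) {
        ScriptingContainer container = new ScriptingContainer();
        container.runScriptlet("""
            java_import 'java.util.ArrayList'
            list = ArrayList.new            # a Java class used like a Ruby one
            list.add('hello from ruby')

            class Task
              include java.lang.Runnable    # implement a Java interface in Ruby
              def run
                puts 'running inside a Java Thread'
              end
            end

            java.lang.Thread.new(Task.new).start
            puts list
            """);
    }
}
```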
Basically, if you have the actual "factual" information, use it directly instead of hoping the LLM will accurately extract it and pass it as part of a function call. In this case they already know what the accurate URLs are; just use them.
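To make that concrete, a minimal sketch with hypothetical keys and URLs: the model is only asked to pick an identifier from a fixed list, and the application resolves the real URL from data it already has, so the URL itself can never be mistyped or hallucinated.

```java
import java.util.Map;

// Sketch: resolve URLs from application-owned data instead of trusting the
// model to reproduce them in a function-call argument.
class KnownUrlResolver {
    // Ground truth the application already holds (illustrative values).
    static final Map<String, String> KNOWN_URLS = Map.of(
        "docs",    "https://example.com/docs",
        "pricing", "https://example.com/pricing"
    );

    static String resolve(String modelChosenKey) {
        String url = KNOWN_URLS.get(modelChosenKey);
        if (url == null) {
            throw new IllegalArgumentException("model returned an unknown key: " + modelChosenKey);
        }
        return url;
    }
}
```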
Where I currently work, our function calls regularly fail only to succeed flawlessly on a retry. (I believe we're on the order of tens of millions of OpenAI calls a day.)
These are non-deterministic systems. I wouldn't even trust them to accurately extract text unless you did a beam search or something similar to average out different LLM outputs.
I am the developer and happy to answer questions.