
I did a little synth project recently that uses an AudioWorklet processor to morph between single-cycle waveforms, and it worked super well. When I first tried to do this from the main thread with the Web Audio API, the audio would stutter when I moved the controls. Moving the synthesis into an AudioWorklet thread eliminated the stuttering. So, if you need real-time sound-shaping controls, you may find that AudioWorklet is a better fit.

https://waves.tashian.com
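For anyone curious what that looks like, here's a rough sketch of the processor side (a simplified stand-in, not the actual code behind the site; the parameter name, waveforms, and fixed pitch are placeholders):

    // waveform-processor.js -- runs on the audio rendering thread, not the main thread
    class MorphProcessor extends AudioWorkletProcessor {
      // A k-rate "morph" parameter: 0 = waveform A, 1 = waveform B.
      static get parameterDescriptors() {
        return [{ name: 'morph', defaultValue: 0, minValue: 0, maxValue: 1, automationRate: 'k-rate' }];
      }

      constructor() {
        super();
        this.phase = 0;
        // Two single-cycle waveforms, 2048 samples each (placeholders: sine and saw).
        const N = 2048;
        this.tableA = Float32Array.from({ length: N }, (_, i) => Math.sin(2 * Math.PI * i / N));
        this.tableB = Float32Array.from({ length: N }, (_, i) => 2 * (i / N) - 1);
      }

      process(inputs, outputs, parameters) {
        const out = outputs[0][0];
        const morph = parameters.morph[0];
        const N = this.tableA.length;
        const freq = 220; // Hz, fixed here just to keep the sketch short
        for (let i = 0; i < out.length; i++) {
          const idx = Math.floor(this.phase * N) % N;
          // Crossfade between the two tables; this loop never waits on the main
          // thread, which is why moving the controls doesn't cause stutter.
          out[i] = (1 - morph) * this.tableA[idx] + morph * this.tableB[idx];
          this.phase = (this.phase + freq / sampleRate) % 1;
        }
        return true;
      }
    }

    registerProcessor('morph-processor', MorphProcessor);

On the main thread you load it with audioContext.audioWorklet.addModule('waveform-processor.js'), create an AudioWorkletNode(audioContext, 'morph-processor'), and point the UI controls at the node's parameters.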


or to hide the AI content areas


I used Claude to help me build a side project in 4 hours that I would never have built otherwise. Essentially, it's a morphing wavetable oscillator in React (https://waves.tashian.com).

Six months ago, I tried building this app with ChatGPT and got nowhere fast.

Building it with Claude required gluing together a few things that I didn't know much about: JavaScript audio processing, drawing on a JavaScript canvas, and an algorithm for bilinear interpolation.
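The bilinear interpolation bit, for example, boils down to a handful of lines. A rough sketch (my own simplification, with made-up corner names; not the app's actual code):

    // Blend four single-cycle waveforms (all the same length) at a position
    // (x, y) in [0, 1]^2 -- a = top-left, b = top-right, c = bottom-left, d = bottom-right.
    function morphWavetable(a, b, c, d, x, y) {
      const out = new Float32Array(a.length);
      for (let i = 0; i < a.length; i++) {
        const top = (1 - x) * a[i] + x * b[i];    // interpolate along the top edge
        const bottom = (1 - x) * c[i] + x * d[i]; // interpolate along the bottom edge
        out[i] = (1 - y) * top + y * bottom;      // then between the two edges
      }
      return out;
    }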

I don't write JavaScript often. But I know how to program and I understand what I'm looking at. The project came together easily and the creative momentum of it felt great to me. The most amazing moment was when I reported a bug—I told Claude that the audio was stuttering whenever I moved the controls—and it figured out that we needed to use an AudioWorklet thread instead of trying to play the audio directly from the React component. I had never even heard of AudioWorklet. Claude refactored my code to use the AudioWorklet, and the stutter disappeared.

I wouldn't have built this without Claude, because I didn't need it to exist that badly. Claude reduced the creative inertia just enough for me to get it done.


What was your workflow for doing that? Just going back and forth in a chat, or a more integrated experience in a dedicated editor?


Just copy/paste from the chat window. I kept running into token limits. I came away from it wanting a much better workflow.

That's the next step for me in learning AI... playing with different integrated editor tools.


Good point.

Primarily, the YubiKey is there to lock away the private key while making it available to the running CA. Certificate signing happens inside the YubiKey, and the CA private key is not exportable.

This uses the YubiKey PIV application, not FIDO.

As an aside, step-ca supports several approaches for key protection, but the YubiKey is relatively inexpensive.

Another fun approach is to use systemd-creds to encrypt the CA's private key password with a key sealed in a TPM 2.0 module and tie it to PCR values, similar to what LUKS or BitLocker can do for automatic disk unlocking based on system integrity. The Raspberry Pi doesn't have a TPM 2.0 module, but there are HATs available.
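If you want to try that route, the rough shape of it looks something like this (paths and credential names are placeholders, and exact flags vary with your systemd version):

    # Seal the CA password to the TPM, bound to PCR 7 (Secure Boot state)
    systemd-creds encrypt --with-key=tpm2 --tpm2-pcrs=7 \
        /etc/step-ca/password.txt /etc/step-ca/password.cred

    # Then hand the decrypted secret to step-ca via its service unit, so the
    # plaintext never sits on disk:
    #   [Service]
    #   LoadCredentialEncrypted=ca-password:/etc/step-ca/password.cred
    #   (the service reads it from $CREDENTIALS_DIRECTORY/ca-password)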


It's true, the defaults are quite strict.

As for the "hours" max interval, this is the result of a design decision in Go's time package and its Duration type, dealing with the quirks of our calendar system.


This is the API, presumably. Not sure what would prevent days; I wasn't familiar with the lore.

https://pkg.go.dev/time#ParseDuration


It's because units up to hours have a fixed size, but in most places a day is only 24h for ~363 of the 365 days of the year, with some being 23h and some being 25h thanks to DST transitions.

(This is ignoring leap seconds, since the trend is to smear those rather than surface them to userspace.)


I love this idea!


Hi, I'm the author of the post. Thanks for your questions here.

> -Complete overkill requiring the use of a YubiKey for key storage and external RNG source - what problems does this solve? For a Yubikey to act as a poor man's HSM you have to store the PIN in plaintext on the disk. So if the device is compromised, they can just issue their own certs. If it's to protect against physical theft of the keys, they'll just put the entire Raspberry Pi in their pocket.

Yep, it's overkill. Homelabs are learning environments. People want tutorials when trying new things. It's a poor man's HSM because not many people will buy an HSM for their homelab, but almost everyone already has a YubiKey they can play with.

The project solves the problem of people wanting to learn and play with new technology.

And it's a way to kickstart a decently solid local PKI, if that's something you're interested in.

The RNG is completely unnecessary flair that just adds to the fun.

> -Creates a two-tier PKI... on the same device. This completely defeats the purpose so you can't revoke anything in case of key compromise.

> -They're generating the private key on disk then importing into the YubiKey. Which defeats having an external key storage device because you have left traces of the key on disk.

The tutorial shows how to generate and store the private key offline on a USB stick, not on the device or the YubiKey. The key material never touches the disk of the Raspberry Pi.

Why store a copy of the CA keys offline? Because YubiKeys don't have the key-wrapped backup and restore feature of HSMs. So, if the YubiKey ever fails, you need a way to restore your CA. Storing the root on a USB stick is the backup. Put the USB stick in a safe.
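In rough terms, the offline part looks something like this with the step CLI (names and mount paths here are placeholders; the tutorial has the exact commands, including importing the intermediate into the YubiKey):

    # Run on a machine with the USB stick mounted at /mnt/usb -- not on the Pi
    step certificate create "Homelab Root CA" /mnt/usb/root_ca.crt /mnt/usb/root_ca_key \
        --profile root-ca

    # Sign an intermediate with that root; the intermediate is what ends up on the YubiKey
    step certificate create "Homelab Intermediate CA" intermediate_ca.crt intermediate_ca_key \
        --profile intermediate-ca --ca /mnt/usb/root_ca.crt --ca-key /mnt/usb/root_ca_key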

If you want active revocation, you can set it up so that the intermediate is revocable—in case physical theft of the key is important to you. (We have instructions to do that in our docs.)

> -All this digital duct taping the windows and doors yet the article instructs you to download and run random binaries off GitHub with no verification whatsoever.

It's open source software downloaded from GitHub. The only non-smallstep code is the RNG driver (GitHub is the distribution point for that project). Was there a kind of verification that you expected to see?

> -Why do you need ACME in a homelab and can't just hand issue long lived certificates?

> -OpenSC and the crypto libraries are notoriously difficult to set up and working properly. A tiny CA this is not.

Most people don't need ACME in their homelab; they just want to learn stuff. That said, we have homelabbers in our community issuing certs to dozens of endpoints.

Whether you issue long-lived or short-lived certs is a philosophical issue. If a short-lived cert is compromised, it's simply less valuable to the attacker. Short-lived certs encourage automation. Long-lived certs can be easier to manage and you can just manually renew them. But unplanned expiry of long-lived certs has caused a lot of multi-million dollar outages.

I hope this helps clarify things.


Despite the critical feedback you've received above, I found the article interesting, and having a homelab with several spare Pis, it's got me considering setting up a CA. Thank you.


How should a company figure out what to charge for something in the first place? Especially a startup that doesn't have much market data to go on, and may be making something entirely new that no one quite knows the value of. When this is the case, one option is to do price discovery. And the way to do that is to remove prices from the website, take calls, learn about customers and their needs, and experiment.


> and may be making something entirely new that no one quite knows the value of.

How many such companies even exist at any given point in time? In software in particular, that's going to be almost none, and those few that are, won't be that for long. For everyone else, there are already competitors doing the same thing, and even more competitors solving the same problem in a different way[0], giving you data points for roughly what prices make sense. Between that and your costs being the lower bound, you almost certainly have something to work with.

--

[0] - There's no "someone has to be the first" bootstrap paradox here. Even if you're lucky enough to genuinely be the first to market with something substantially new, it still is just an increment on some existing solution, and solves a variant of some existing problem, so there is data to go on.


When you don't know how valuable it's going to be, you at least know how expensive it is to make.

For a company wanting to make a profit, you need to cover your costs, so that's a minimum, with some reasonable profit on top.

If you can't figure that out either, well...


If a client pays for a link that's part of a chain, doesn't want the chain broken, and still turns a profit, it means the client can pay more; that link is worth more.


AI agents run in isolated VMs, but PDFs have been out here running in the open for 30 years!


But can your PDF run an AI agent?


> But can your PDF run an AI agent?

Oh it's so much worse than that. Your font can run an AI agent.

Llama.ttf: A font which is also an LLM -- https://news.ycombinator.com/item?id=40766791


Crazy. Looking forward to shipping apps as .ttf files instead of Docker images.


You can also play Tetris in a font: https://www.youtube.com/watch?v=Ms1Drb9Vw9M&t=1370s

(disclaimer: own work)


Well, a font using a custom experimental shaping library. Your font can't do it normally.


In my opinion the question isn’t so much “if” but rather “when”.

When will AI research and hardware capabilities reach a point that it’s practical to embed something like that into a regular document?

We’ve already seen proof of concept LLMs embedded into OpenType fonts.

I guess the other question is then “what capabilities would these AI agents have?” You’d hope just permission to present within that document. But that depends entirely on what unpatched vulnerabilities are lurking (such as the Microsoft ANSI RCE also featured on the HN front page)


For Chrome's PDF renderer, the runtime is V8, so we're literally one (hilarious) line of code away from this glorious future existing today:

https://pdfium.googlesource.com/pdfium/+/refs/heads/main/fpd...

> // Use interpreted JS only to avoid RWX pages in our address space. Also, --jitless implies --no-expose-wasm, which reduce exposure since no PDF should contain web assembly.

> return "--jitless";


You could write an LLM in plain JS, right?


Yep, but one without the ability to even JIT down to vectorized CPU instructions (to say nothing of GPU connectivity) would be incredibly slow indeed!


Looking forward to a day when you may not have a powerful enough GPU to open a PDF


The first widespread AI malware will be a historic moment in this century. It will adapt to its host like a real biological virus, and we have no cure for that.


We could unplug all the GPUs.


Reminds me of the Hamming distance texture: https://chalkdustmagazine.com/features/the-hidden-harmonies-...


Reminds me of glitching the integer circle algorithm: https://nbickford.wordpress.com/2011/04/03/the-minsky-circle...


If the Xor texture is Xor(x,y), then the Hamming distance texture would be PopCount(Xor(x,y))

(I think, maybe?)
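Here's a quick sketch to eyeball both side by side (plain JS on a canvas; the popcount helper and the grayscale scaling are mine):

    // Left half: classic XOR texture. Right half: popcount(x ^ y), i.e. Hamming distance.
    const canvas = Object.assign(document.createElement('canvas'), { width: 512, height: 256 });
    document.body.appendChild(canvas);
    const ctx = canvas.getContext('2d');
    const img = ctx.createImageData(512, 256);

    const popcount = n => { let c = 0; while (n) { c += n & 1; n >>= 1; } return c; };

    for (let y = 0; y < 256; y++) {
      for (let x = 0; x < 256; x++) {
        const xorVal = x ^ y;                 // 0..255
        const hamVal = popcount(x ^ y) * 31;  // 0..8, scaled up so it's visible
        for (const [dx, v] of [[0, xorVal], [256, hamVal]]) {
          const i = 4 * (y * 512 + x + dx);
          img.data[i] = img.data[i + 1] = img.data[i + 2] = v;
          img.data[i + 3] = 255;
        }
      }
    }
    ctx.putImageData(img, 0, 0);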


Yep!


I hadn't heard of this one; great article and website, cheers :)

