“It’s an achievement in itself,” says Jinyang Liang, the lead author of the work, who was an engineer in COIL when the research was conducted, “but we already see possibilities for increasing the speed to up to one quadrillion (10^15) frames per second!”
Honest question: what can be seen at one quadrillion fps that 10 trillion fps cannot already show? I ask out of pure ignorance and wonder, like asking "why would you ever want to move faster than a horse?" This is at a scale I just cannot think in anymore.
At that speed you can measure the "echoes" of light: using femto-photography and some computing, it's possible to image objects around corners, outside of the camera's line of sight.
Years ago I saw film of an atomic explosion shot with nanosecond frames, and the comment was that it still wasn't fast enough, since each frame showed an entirely different image.
Here's a great example, involving something people have a really hard time wrapping their heads around too.
Let's say that you have a train moving at a constant velocity, v. Then a switch is flipped and turns on a light at the exact center of the train car. Which wall does the light hit first? [0]
Seen from on the train [1]: it might as well be a stationary room. The light hits both walls at the same time.
Seen from off the train [2]: the speed of light is constant, so even though the train is moving, the light can't travel at v + c. It hits the back wall, which is traveling toward the light at v, first.
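To make that concrete, here's a minimal Python sketch of the off-train arithmetic. The car length and train speed are made-up illustration values, and length contraction of the car is ignored for simplicity:

```python
# Minimal sketch of the off-train view. Car length and speed are
# made-up values; length contraction of the car is ignored for simplicity.
C = 3.0e8     # speed of light, m/s
L = 20.0      # assumed car length, m
v = 0.5 * C   # train speed, exaggerated to make the effect obvious

t_back = (L / 2) / (C + v)   # back wall closes on the light at c + v
t_front = (L / 2) / (C - v)  # front wall recedes; closing speed is c - v
print(f"back wall hit after  {t_back:.2e} s")   # ~2.2e-08 s
print(f"front wall hit after {t_front:.2e} s")  # ~6.7e-08 s
```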
This is a famous thought experiment, and in practice it would be difficult to perform even with such a camera. I mention it because it illustrates that we can actually observe relativistic effects: things behave very differently from what we're used to when they are very small or moving very fast. Given a really high resolution, something like length contraction could be observed, and measured, under normal conditions.
So there are actually a lot of weird things going on that we wouldn't be aware of. These ultra high speed cameras allow us to observe some of these strange phenomena.
Watching this confuses me as to the timing of it all... if the conceit is that we're watching a bundle of photons move through the bottle, that's because photons from the source are hitting the camera, right? So is what we see the light moving through the bottle plus the travel time to the sensor? Shouldn't refraction and reflection across the surface cause a lot of weird visual interference, since the bottle's size is no longer insignificant at the time scale of light's velocity? Is that what we're seeing there?
When the light reaches the camera, it has already moved roughly the same distance forward through the bottle. So my guess is yes, each frame is visibly showing "the past".
My guess is also that if the laser is really focused, and the bottle weren't in the way refracting some light back to the camera, we wouldn't be able to see anything at all.
I had this similar realization when I watched that video from 2011 the first time around (hard to believe it was 7 years ago).
It also gave me this odd sense of blindness: I cannot actually see what's in front of me, only what literally interacts with my retina, almost as if it were made of taste buds for light. Still weirds me out when I think about it.
Yes, normally to video a car going past I'd take a series of photos as it goes past once.
The approach here would be to take one photo of the car, then drive the car past again from the start and take another photo slightly later, then again and again, driving it past hundreds of times.
They split this up even more, but I think the basic analogy holds to show the key difference.
I believe the approach in your video relies on a single laser pulse per frame, with many laser pulses combined to create the series of images for the video. The new approach, in contrast, captures the whole sequence from a single laser pulse, in real time.
In orders of magnitude: about 10^13 * 10^-44 = 10^-31 ... not very close. If you stretched the Planck time to 1 s, a frame of this camera would, by comparison, last about 10^14 times the age of the universe (10^17 s).
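A quick Python version of that back-of-the-envelope comparison, using the same round numbers as above (the actual Planck time is closer to 5.4e-44 s):

```python
# Order-of-magnitude comparison, using the round numbers from the comment.
PLANCK_TIME = 1e-44     # s; the actual value is closer to 5.4e-44 s
FRAME_INTERVAL = 1e-13  # s, one frame at 10 trillion fps
AGE_OF_UNIVERSE = 1e17  # s, rough order of magnitude

planck_times_per_frame = FRAME_INTERVAL / PLANCK_TIME  # 1e31
# If one Planck time were stretched to 1 s, a frame would last 1e31 s:
print(planck_times_per_frame / AGE_OF_UNIVERSE)  # 1e14 ages of the universe
```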
Let's say you have a very bright light source that shines a full 100 Watts of light onto the sensor of your camera (I'm using this not as a typical example but because it makes it easy to scale the answer). Photons of visible light have an energy of at least 1.5 electron volts (800nm red light), which means that 100 Watts of light represents 4.2e20 photons per second.
And that means that with only 100 Watts of light reaching your sensor, you cannot attain a frame rate higher than 4.2e20 fps, because at that speed you'd only get around one photon per frame on average. More realistically you need tens of thousands to millions of photons per frame to have some meaningful level of dynamic range and spatial resolution, which limits you to around a quadrillion fps per 100 Watts of light falling on the sensor.
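A minimal sketch of that photon-budget arithmetic; the 1e5 photons-per-frame figure is just an assumed value from the range above:

```python
# Photon-budget estimate from the comment above (assumed values).
E_PHOTON_J = 1.5 * 1.602e-19  # ~1.5 eV per 800nm photon, in joules
POWER_W = 100.0               # light actually reaching the sensor

photons_per_second = POWER_W / E_PHOTON_J  # ~4.2e20
print(f"{photons_per_second:.1e} photons/s")

PHOTONS_PER_FRAME = 1e5  # assumed: ~1e5 photons for usable dynamic range
print(f"max ~{photons_per_second / PHOTONS_PER_FRAME:.1e} fps")  # ~4e15
```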
Though once you get into that range you also have signalling problems; we don't really have electronics that work at those speeds.
The fastest time resolution generated so far is in a niche field, attosecond physics, where they can produce pulses in the low-hundred-attosecond range, around 10^-16 s.
The signal is extremely weak; the conventional "shortest pulse" is around 5 femtoseconds.
First, they are producing too many collisions (probably many quadrillions of collisions; I can't find the exact number). Most of them are boring, so they have a lot of filters to select and save only the slightly interesting collisions, because otherwise it would be impossible to store the data. IIRC they only got a few thousand Higgs bosons; the signal-to-noise ratio is almost 0.
Second, to film something using light, the object must be bigger than the wavelength of the light. There are some tricks to push this down a little, but you can't film elementary particles directly. The old method was to let the particles create small bubbles or droplets inside a container, and then photograph the chain of dots with visible light. In the new equipment the process is more complicated, but the interesting part of the collision is too fast and too small. They only measure the particles flying away after the interesting part has happened; they measure the leftovers and try to reconstruct the actual collision.
Third, the interesting collisions have strong quantum effects, and the quantum effects "disappear" when you use light that is strong enough to get an accurate position for the particles. You can imagine that if the light is strong enough, the light itself will bounce off the particles and change their direction, altering the collision. It sounds somewhat like magic, but it can be stated correctly and precisely with a lot of math, and it is one of the bases of quantum mechanics: https://en.wikipedia.org/wiki/Uncertainty_principle
A photon needs to actually hit the sensor in order to be recorded, so if we're considering the photon to be a particle (rather than a wave), it would have to be in the same position in both frames in order to appear in two frames at all.
For 635nm red laser light, you'd need to be sampling somewhere in the order of 9.4 x 10^14 times a second to get two samples per cycle. That's roughly (3 x 10^8 m/s / 635nm) x 2, but then the question comes down to how many cycles make up a photon, and whether that question even makes sense in the first place.
I was working from the idea of the Nyquist rate needed to "perfectly" reproduce a waveform. I know this works for lower frequencies, but I have no idea how well it translates to something like optical frequencies/light. You can find frequency from speed / wavelength: f = v/l, or in our case f = c/l since we're dealing with light.
I was using 635nm as the wavelength (basically a red laser). That gives you:
3 x 10^8 / (6.35 x 10^-7) = 4.7 x 10^14
Which should be about 470 terahertz (4.72 x 10^14 Hz), give or take. To sample that "perfectly" you'd need to sample at twice the frequency, about 940 terahertz (9.4 x 10^14).
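For the curious, a tiny Python check of that arithmetic:

```python
# Nyquist-rate arithmetic for 635nm red laser light.
C = 3.0e8            # speed of light, m/s
WAVELENGTH = 635e-9  # 635 nm in metres

freq = C / WAVELENGTH  # ~4.7e14 Hz, about 470 THz
nyquist = 2 * freq     # ~9.4e14 samples/s for two samples per cycle
print(f"frequency {freq:.2e} Hz, Nyquist rate {nyquist:.2e} samples/s")
```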
As to your second question, I know that there are single-photon detectors. Past that, you've got me; I don't know whether that can be classified as "seeing" or not. As to size, there's https://briankoberlein.com/2015/04/14/thats-about-the-size-o... but that might or might not make sense.
That's about the limit of what I'm willing/able to say on the subject.
For the camera to pick up 1cm of light travel between frames, you have to capture 3x10^10 frames per second. You can work out the limit for whatever delta you want just by dividing the speed of light by that distance.
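A minimal sketch of that relationship (the function name is just for illustration):

```python
# Frame rate needed for light to move a given distance between frames.
C = 3.0e8  # speed of light, m/s

def fps_for_delta(delta_m: float) -> float:
    """FPS at which light travels delta_m between consecutive frames."""
    return C / delta_m

print(f"{fps_for_delta(0.01):.0e} fps for 1 cm per frame")  # 3e+10
# At 10 trillion fps, light only moves C / 1e13 = 0.03 mm between frames.
```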
Thanks for the links! I see your point, but lots of people wouldn’t know about this if it was just a press release. I know I don’t have enough time to check for them.
It doesn't really capture 10 trillion frames per second. The laser pulses at a regular interval, and the combination of the regular pulsing + multiple pictures taken at different points along those pulses makes for 10 trillion distinct frames captured within the timeframe of 1 second. There is no actual video captured.
I get that being a tech journalist is difficult; you have to juggle the tech and the layman. But after writing a headline, read it back to yourself and ask: will this put the wrong idea in people's minds? If the answer is yes, rewrite it... even if it sounds less cool.
I don't quite understand how your original point refutes the headline. They are saying that it captures frames at a rate that, measured in seconds, would be ten trillion frames per second.
It does not capture events as they occur. You can't do a 10 Tfps recording of a lightning strike with it, for example, because every lightning strike is different. It only allows you to synthesize an averaged video of repeatable events.
Did you read TFA? It seems that you are describing prior techniques for ultrafast image capture, whereas this technique is for capturing single, non-repeatable events.
> Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones. [...] The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.
From what I understand, it is a camera, so it can keep going as long as you can record the data fast enough. I didn't read about any limitation of the current setup across time (there is one across space, though). So far only 25 frames have been captured; it is fundamental research, not yet a product people can get value out of.
Note that what we want to observe with these cams are very short, transient phenomena. When I was doing my internship at a particle accelerator called GANIL, we were only recording 0.5 seconds, which already represented close to 1TB of raw data. It takes months to interpret and analyze the results.
That's incredible. How long did it take to write 0.5s of data to disk? I'm guessing there's no way to sustain this, as you'd be so far behind after only a single second. I'm pretty sure we can still only store a few gigs per second. Please correct me if I'm wrong. Very interesting though!
The best way to think of this is that it might take 100 seconds to 'record' those 10 trillion frames that occur in 1 second.
That doesn't seem to make sense, but imagine this. You want to shoot 100 frames of the first millisecond of an airsoft pellet leaving a gun, but you have a camera that only shoots around 2 frames per second.
Your airsoft gun shoots 1 ball exactly (1ns accuracy) every second, exactly the same velocity and direction.
You have your camera, which only captures 2 frames every second, but it has an insane shutter speed, 1 microsecond, and a shutter that you can time to the gun exactly.
You can also delay the release of the shutter by 1 microsecond increments.
So, you start by taking 1 picture 10 microseconds after you shoot a pellet. Then, on the next shot, you take one at 20 microseconds; you do this 100 times. You stitch it all together, and you have a super-slow-motion video of an airsoft pellet leaving the gun. It just happens to be 100 different airsoft pellets.
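If it helps, here's a toy Python sketch of that stitching procedure; the capture function is a hypothetical stand-in for the real trigger hardware:

```python
# Toy sketch (illustration only) of the repeated-event trick above:
# each shot is sampled once, with the shutter delay stepped each time.
FRAMES = 100
DELAY_STEP_US = 10  # shutter-delay increment per shot, microseconds

def capture(shot_number: int, delay_us: int) -> str:
    """Hypothetical stand-in for exposing one frame delay_us after the shot."""
    return f"shot {shot_number}: exposed at t = {delay_us} us"

# 100 shots of 100 different pellets, each photographed a bit later:
video = [capture(i, (i + 1) * DELAY_STEP_US) for i in range(FRAMES)]
print(video[0])   # shot 0: exposed at t = 10 us
print(video[-1])  # shot 99: exposed at t = 1000 us
```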
I agree that the article isn't very clear on this, but I believe you're describing the previous work.
> Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones.
The new innovation here is that it actually records the frames one right after another from a single event:
> The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse’s shape, intensity, and angle of inclination.
I'm more curious about how they can store it. Even if each frame could somehow be encoded as a single byte, you're looking at ~10TB/s, which AFAIK exceeds even the bandwidth between L1 cache and the execution core of a modern CPU.
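Rough Python sketch of that estimate; the 256x256 frame size is an assumption for comparison:

```python
# Rough arithmetic behind the ~10TB/s figure (assumed values).
FPS = 10e12  # 10 trillion frames per second
print(f"{FPS * 1 / 1e12:.0f} TB/s at 1 byte per frame")          # 10 TB/s
print(f"{FPS * 256 * 256 / 1e12:.0f} TB/s at 256x256 x 1 byte")  # ~655,000 TB/s
```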
Sorry, I thought you were challenging the notion that it's 10 trillion frames per second because it's not a full second (someone else did that elsethread)
The technique of making "ultrafast" movies of repeating phenomena by imaging "one pixel" at a time and then assembling the raster-scan of all the pixels has been used for a long time.
The technique described in the paper is different from that....
> Thus far, established ultrafast imaging techniques either struggle to reach the desired exposure time or require repeatable measurements. We have developed single-shot 10-trillion-frame-per-second compressed ultrafast photography (T-CUP), which passively captures dynamic events with 100-fs frame intervals in a single camera exposure.
I feel like many readers of news carry this misunderstanding, so just to clarify: journalists very often do not get to write their own headlines. Many times they aren't even consulted on the headline. They may be just as surprised and dismayed as a reader/critic.
It is frequently editors who write the headline, and it might even be editors who weren't involved in the reporting. Editors write headlines balancing different motives and goals, and strict accuracy isn't always at the top of that list.
That is not the case; from the article that you criticized:
> The first time it was used, the ultrafast camera broke new ground by capturing the temporal focusing of a single femtosecond laser pulse in real time (Fig. 2). This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse's shape, intensity, and angle of inclination.
Obviously they're only recording extremely small timeframes with this setup, but it is indeed real time.
Note that a rate such as "x frames per second" doesn't mean it needs to be able to operate for a full second. That would be like saying that being able to sprint at "12 miles per hour" means you can run a full 12 miles in an hour.
Watching a 1 second video shot at 10 trillion fps would make for a very boring lifetime, or 15 of them. A 240Hz display would take ~1300 years to display all 10,000,000,000,000 frames.
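A quick sanity check of that figure in Python:

```python
# Playback time for 1 s of capture at 10 trillion fps on a 240Hz display.
FRAMES = 10e12
DISPLAY_HZ = 240
SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

years = FRAMES / DISPLAY_HZ / SECONDS_PER_YEAR
print(f"~{years:.0f} years of playback")  # ~1320 years
```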
The fact that "statement A does not imply statement B" does not itself mean "statement A implies that statement B is false".
Just because "X can run at 12 miles per hour" does not imply "X can run 12 miles in an hour", it does not mean ummonk was claiming that no one can run 12 miles in an hour.
They mentioned that in some of the use cases, the thing they're imaging is so fragile that it can only tolerate one laser pulse. So I don't think you're correct.
There's even an easy analogy for this! Putting strobe lights on fast, repetitively moving machines allows you to see their motion slowed down.
EDIT: I'm responding here to the parent poster's claim, not to the original article. For more information on the technique used in the article, see this other article[0] about "CUP" vs this article about "T-CUP". In CUP they use a random 2D sampling of a scene projected onto a streak camera.
Just getting started... o_O