The Matrix Is Unreal (fxguide.com)
492 points by vblanco on Dec 12, 2021 | 226 comments



The people trying to point out that this falls short of photorealism are missing the point. This is a tech demo aiming to show off the next-generation leap Unreal Engine 5 is making. There's a lot to be impressed by here, and it was made with a ton of procedural generation by a team of only about 70 people.

Here’s a great deep dive into some of the new technologies that highlights the strengths and weaknesses of some of the new engine tools: https://youtu.be/ib6_c6uliLg

It is extremely exciting that this is possible on a year-old console rather than a 3090.


The HN crowd's capacity for interest has been so deadened by the constant dopamine flow that they simply cannot be entertained anymore.

"Meh" is the maximum level of emotional response that can be elicited.

The invention of literal time travel, the invention of AGI, or the manufacture of a black hole as an energy source would all be met with mehs. Just a bunch of people dead inside. I still come here for the links and the 5% of comments that are insightful and not pedantic.


Your jaded view of HN may or may not be accurate, but I think the general HN audience has already built in the expectation that computer graphics will continually get better through incremental improvement, not too dissimilar from chip development.

Is this impressive? Yes, and kudos to the people who built it. Is it life changing in any meaningful measure? Not really. Is it a scientific breakthrough or a complete paradigm shift that will change the lens through which you view your existence? Probably not. And the expectation that this would happen was already built into how I see the steady march of technology development.

This is an incremental rather than a step-function change in expectations and was already a pre-ordained outcome. You can argue that my expectations are too high, and maybe this came earlier than expected, but it isn't some shock to my system.

I suspect that I am not the only one who feels this way.

And if you are part of the team and feel like people don't appreciate your work - that isn't the case. It is very impressive, but it isn't a eureka moment of excitement (at least to me).

If I misunderstand the significance - please guide me to what I am missing here.


To me, the missing significance is that nothing is pre-ordained. As entrepreneurs, engineers and scientists, we build the future. The future does not build itself. And if you are a trendline oracle, the trajectory has been better looking games that require ever more development effort and ever more hardware. That trend has been extremely stifling to creativity due to the ever increasing financial investments needed. The promises of this new engine are exciting exactly because they break that trend. It runs on existing commodity hardware and achieves much better results than everything released on that hardware so far and with less development effort. Computer graphics are not going to solve anyone’s existential questions, but they are a significant creative tool across many art forms and better tools that require less effort to achieve better results are only a good thing for the creative industry, just like cheap digital cameras and editing software democratized filmmaking.


I don't disagree with you; I disagree with the follow-up comment from the previous poster about it being 'meh'.

Nothing is pre-ordained, for sure; that said, there certainly seem to be fairly clear directions in technology. I think what was missing from my understanding is that this is a much more efficient use of resources for a better engine. It sounds like a leap forward in graphics engines (now developers will be captured by Unreal Engine - a great business model for the company).

It still isn't the eureka moment that the other poster was hoping for. It's very impressive, but I am not going to swoon over the achievement. That doesn't make me "dead inside" just because I can't get fully amped up for this post, and I think the poster missed how people might view this development, which is why I added my own comments.


I would caution against forming judgement on an entire population based on the loudest minority within it. In my experience there seem to be many more reasonable, restrained, and quiet folk out there. Now, if the signal-to-noise ratio in the comments here is trending towards more noise for you, then by all means the comments are less useful and may not be worth the time at some point. But don't confuse the loud cynical voices as representative of all of us.


No site will ever be judged by the quality of its lurkers.


Are you implying AGI and time travel are at the same level as improved computer graphics?


No, it was saying that even a fantastic invention would fail to garner enthusiasm.


I think people were pretty blown away by the mRNA vaccines for example.


I don't think either a year-old console or a 3090 is any form of "consolation".

Also, while I agree with you in principle, this is still a movie, and one of a trilogy that defined a significant part of culture, I might add, so it's a risky move at best.

But yeah, besides the facial quality (which I think is vastly underwhelming), the movie looks pretty nice, and the rest doesn't even matter when you consider most movies/shows today are VFX. I remember seeing the first-ever UE5 tech demo years ago (I think it predates ray tracing, if I'm not mistaken) and I was kind of surprised.


70 people is rather a lot.


I don’t know, i was kind of blown away by the length of the end credits screen in call of duty. I reckon there were several thousand names listed there.

Fwiw this demo is on another level than call of duty. The world feels huge, extremely busy - there’s cars and people everywhere - and so varied.


> Fwiw this demo is on another level than call of duty. The world feels huge, extremely busy - there’s cars and people everywhere - and so varied.

Indeed, but the truth is it's just a demo; when you add all the rest that's needed to make a real game, you start to have issues that you wouldn't have with a demo.


I remember reading that the end credits of Red Dead Redemption 2 was something like 7000 people


Red Dead Redemption II (PS4) Credits

7318 people (4132 developers, 3186 thanks)

https://www.mobygames.com/game/playstation-4/red-dead-redemp...


Depends on what they all do I suppose. For a franchise tie in, there were probably a lot of people not doing the actual content production.


For AAA quality? Not that many I think. Sure it's not a game but then production wasn't for as long either.


It’s all relative as Einstein once said.


The real excitement will come when it does not require a new $3000 graphics card to run on the home PC I already spent $1500 on...

The price of keeping up with every new enhancement on PC is directly linked to each new engine update, but somehow game releases can be "optimized" for gameplay on ~3 year old consoles... Also why I haven't bought any new PC games since GTA5.


And that is the super exciting, mind-blowing thing about this: it ran on the Xbox Series X/PS5! In fact, it even runs on the Xbox Series S with a small visual downgrade, and that is the cheap version with 1/3 the GPU power of the Series X (at 4 TFLOPS, it is perhaps very roughly equivalent to a GTX 1060 from 2016). This is a triumph of software, not hardware. I imagine you should still be getting a lot of use out of your current rig.


> This is a triumph of software, not hardware

Purpose built gaming hardware has significant advantages over a typical PC. Modern consoles have SoC designs with incredible bandwidth between CPU/GPU/RAM that you simply can't have on upgradable form factors.

The closest thing in the general purpose computing world are the M1 family of SoC MacBooks. UMA on the M1 in particular could be harnessed to have physics and graphics geometry in sync with each other without the penalty of bus transport - so things like high-detail deformable environments become much more feasible. But again, even these machines are geared towards general purpose computing, not graphics.


> Modern consoles have SoC designs with incredible bandwidth between CPU/GPU/RAM that you simply can't have on upgradable form factors.

So where's the PC platform with these advantages? I mean, most PC players will replace their whole system instead of upgrading individual components (anecdotal / speaking for myself). Where's the PC-like-a-console?

I guess the Steam Deck will be the one to look out for. But I would also expect gaming laptops to have those advantages.


The PC market has found a local maximum that prevents it from getting there.

You need the CPU, GPU, motherboard, and RAM to be one product shipped by one company, but instead you've got a CPU from Intel, a GPU from AMD/Nvidia, a motherboard from the PC manufacturer, and off the shelf RAM.

"Gamers" want an Intel CPU because of the branding (although this is shifting towards AMD now), they want a particular graphics card because of branding, they may even care about the specific motherboard and/or RAM.

Apple are able to do this console-like integration and performance because their target market don't care enough about each individual component, the Mac is the product and the brand they get behind. PS5 owners aren't buying it because of the brand of CPU.


I don't think it's brand loyalty on the part of end customers; it's just unlikely and difficult for a number of large vendors to coordinate on such a project. You would need motherboard, CPU, and GPU designers to take a long bet on a new form factor while also cutting into their bottom line (SoCs require fewer components and offer better performance for the money).

I think the speed advantage would be one welcomed by the average PC user, and if Intel aren't already colluding to make an answer to M1, they're heading towards extinction


Do you really need that much bandwidth between CPU and GPU? On a PC the GPU has its own memory, and the memory bandwidth of, for example, a 3070 is greater than the memory bandwidth of the Xbox.

Would this demo not run on a PC?

I understand that PC hardware is more expensive, but it was also my understanding that the performance ceiling is actually much higher than on consoles, especially once the next generation of PC hardware comes out after a console release.

It's hard to tell how much more expensive a PC is because of console manufacturers' loss-leader accounting.


CPU and GPU are good at different tasks, but transferring any amount of data across the PCIe bus is pretty expensive, forcing you to use both in a suboptimal way. If you had fast enough data transfer between them, you could do SIMD-like tasks much faster on the GPU while doing tasks that naturally involve branching faster on the CPU, using whichever processor is faster and has spare capacity instead of whichever happens to have the data.


This is correct. CPUs are better suited to arbitrary execution (e.g., physics, game logic, networking), whereas the GPU is suited to highly organized data structures and massively parallel computation. Things that require dynamic effects can benefit greatly from reduced dispatch times between GPU and CPU. UMA, as featured on the M1 and the PS4, permits almost instantaneous transfer, save for cache coherency issues. The nexus of real-time physics solvers and graphics rendering is, I think, the next step in taking advantage of SoCs and UMA.
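A rough back-of-the-envelope sketch of why the copy tax matters. The bandwidth figures are approximate (~32 GB/s theoretical for PCIe 4.0 x16, ~448 GB/s for the PS5's shared GDDR6) and the per-frame payload is a made-up example:

    #include <cstdio>

    int main() {
        // Hypothetical per-frame payload: 2M deformable vertices * 32 bytes each.
        const double bytes    = 2e6 * 32.0;   // 64 MB per frame
        const double pcie4x16 = 32e9;         // ~32 GB/s, theoretical PCIe 4.0 x16
        const double ps5Uma   = 448e9;        // ~448 GB/s, shared GDDR6 pool

        printf("PCIe round trip: %.2f ms\n", 2.0 * bytes / pcie4x16 * 1e3);
        printf("UMA access cost: %.2f ms\n", bytes / ps5Uma * 1e3);
        // At 60 fps you have ~16.6 ms per frame, so a multi-millisecond copy tax
        // each way quickly eats the budget for dynamic geometry on a discrete GPU,
        // while unified memory just reads the same bytes in place.
    }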


Ok but in practice what can you do on console that you can’t do on pc?


I mean, you could run a database or a spreadsheet on a console, but it wouldn't be a good fit. The benefit of a console is the price-to-performance ratio for gaming applications, whereas the trade-offs on PCs are geared toward general-purpose computing and browser speed. And as I've explained a few times in other threads, SoC and UMA architectures permit certain types of processing (such as destructible/deformable environments) to become feasibly performant. Ultimately a console can process everything a PC can, and vice versa; it's just that consoles are more performance-tuned for gaming, and the other has to cater to the lowest common denominator.


One of the advantages of fixed hardware is you can tune for that hardware directly. We used to bake content and design code with specific cache line sizes in mind. If you know you have a fixed memory budget for graphics/sound/etc., you can make specific trade-offs on where you spend that budget. PCs until recently did not have UMA, so you were bound by bus speeds (this applied to some consoles too, but you could exploit it if you knew how, e.g. streaming audio data from graphics memory; as long as you stayed under the total bus bandwidth, you got extra memory for the system if you needed it).
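For what it's worth, a minimal sketch of the kind of cache-line-aware layout being described, assuming a 64-byte line; the struct and its fields are invented for illustration:

    #include <cstddef>

    // Hot data touched every frame is packed into exactly one 64-byte cache line,
    // so iterating a tightly packed array of these never straddles two lines.
    struct alignas(64) ParticleHot {
        float px, py, pz;   // position
        float vx, vy, vz;   // velocity
        float radius;
        float lifetime;
        float pad[8];       // explicit padding up to 64 bytes
    };
    static_assert(sizeof(ParticleHot) == 64, "fits exactly one cache line");

    // Cold data (names, debug info, etc.) would live in a separate array so it
    // never pollutes the cache during the per-frame update loop.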


> One of the advantages of fixed hardware is you can tune for that hardware directly

With UE5, it's the engine that tunes the rendering quality for the HW available.


I have found I don't update my PC for years, but games just keep looking better. A lot of newer techniques, Raytracing excluded, seem to be much more efficient at making good-looking models without much geometry. It's all shaders, decals, and image maps. Running a 1080p monitor helped that a lot. I recently upgraded to 1440p, and it has cost me frames, no doubt.


> Raytracing excluded

About that: screen-space ray tracing has been around a while, and even if you miss little things (no diffusion from lights off screen or behind a shade), the resulting eye candy is often enough to trick people's minds even if it's not physically correct... and hardware three generations older is more than enough to run that.


GPUs are $3000 because of Bitcoin, not Unreal Engine.


The demo runs on consoles.


This is a console demo, not a PC demo (unfortunately or fortunately, however you want to look at it). It runs on the PS5 and Xbox Series X/S which cost 500 USD.


The PS5 is $500 if you can find it; there have been shortages for over a year, and the effective price is $900 on eBay, where you can actually get one without waiting.


This is so true, yet so totally beside the point.


I managed to get two, a digital (€400) and a Blu-ray model (€500) a few weeks ago from Carrefour here in [redacted].

It is a bit of a pain having to monitor stock drops but most online stores have a queue system in place as well as purchase limits to try and make life harder for people that buy up every console they have to just put them on eBay at twice the price.

A colleague of my wife managed to get a PS5 in a Fnac store a few weeks ago as well. Just went in randomly and asked, and they had them in stock. Only a few, and they sell out quickly, so it was just luck in that instance, but if you put in a little effort you can get one in a week or two (depending on stock allocation, of course) without having to pay over MSRP.


You can usually get one off r/hardwareswap for MSRP, and it's delivered within a few days.


Yeah it's really cool, the problem is it barely runs at 30fps and there is no way in hell it's going to run at 60fps on a console.

I never had a PS4/Xbox One; I skipped the last console cycle and stayed on PC, then recently got a PS5, where every title has an option to play at 60fps. There is no way in hell the consoles are going to be able to push this fidelity at that frame rate, let alone the promise of 120fps that has been floating around.

And tech like Nanite and Lumen: it's not as if you can just "turn it off" and pick up a bunch of performance. It seems like you have to develop your game with these tools in mind.

In general I think the new consoles are going to have some trouble; even today games don't render at native 4K, they render much lower and then get upsampled, and dynamic resolution is also in place to pick up the slack. The promise of 4K/60fps on a console is kind of smoke and mirrors.


"It's really cool" is severely downplaying the achievement here. It's like you got a 1st gen jetpack for christmas and you're complaining it doesn't have enough horsepower.

The fact that this runs in real time, at 30fps, on a $400 console is mind-blowing. It's even able to downscale to a smooth 1080p/30 on the Xbox Series S. You would not have dreamed of achieving this level of fidelity 5 years ago; there is nothing out there that compares. Nanite and Lumen are the reason it performs so well, not the other way around.

On top of that, the promise of 4k/60 on PCs still hasn't been met. You can spend $3-4k and still barely achieve that with a 3090. This engine might be the first to actually get there.


I agree. What a Series X (what we've got, can't talk to PS5 as I haven't seen it) can output is just mind boggling when considering the price.

Sure, my AUD $6K PC is faster (with a multi-core overclocked Ryzen, a 3070, fast memory, PCIe 4 NVMe storage, premium case/mobo/power/cooling), but at $750 what you get blows my mind. You'll struggle to match it at 3x the price in a custom-built PC, and that was with pre-hike GPU prices.


We probably won't get games of this fidelity on current console hardware, but at the very least this shows that this level of fidelity is coming. Next-gen consoles and high-end PCs will at the very least be able to achieve it. I find that exciting.


The fact that the exact same demo can run on the Series S (1/3 the rendering power) with barely a visual downgrade shows that we will absolutely be hitting these visuals on the Series X/PS5 at 60fps and temporally upscaled 4K. 60fps will no doubt be a major design goal of the engine, so even if their current “wow” demos target 30fps, finished games will look quite close to this and absolutely hit 60fps. I agree that this is a marketing demo, but it's a playable one and does a lot to answer prior skepticism about UE5. As end consumers, we can feel free to remain skeptical till we see the first finished game, but the aim here is to sell the engine to game devs, and they are the skeptics who will need to be won over before any consumers see finished games.


IMO 4K/60fps as a goal is overrated. 3K upscaled to 4K using a newer-generation upscaler is not going to look much worse than native 4K, especially when viewed on a TV at a typical distance from one's couch. And 60fps, while crucial in some genres, is not all that important in others (particularly narrative-focused single-player titles, for which Unreal Engine is often used, especially among non-major publishers).
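Rough numbers on why the internal-resolution drop is such a win (simple arithmetic, assuming "3K" means a 75% scale of 2880x1620):

    #include <cstdio>

    int main() {
        const long long native4k = 3840LL * 2160;   // ~8.3 M pixels shaded per frame
        const long long internal = 2880LL * 1620;   // ~4.7 M pixels at 75% scale
        printf("internal render is %.0f%% of the native 4K shading work\n",
               100.0 * internal / native4k);        // ~56%, before upscaling cost
    }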


60fps as a goal is vastly underrated. We should absolutely be aiming for decent framerates.

The 'golden age' arcade games of the early 80s mostly ran at 60fps. And this trend continued in the arcades, with the most impressive early-90s 3D games also choosing framerate over detail, which was absolutely the right decision for Daytona USA, Sega Rally, Ridge Racer, and more. A solid 60fps pretty much defined 'arcade quality'.

It's understandable that early 3D home consoles had to aim a bit lower. But we're now on the 5th generation of 3D home console, and we're still making excuses to run at lower framerates than games of the 80s/90s?

But 4K, that just came too soon. Every recent generation of console has struggled to keep up with the rate of screen resolution increase. And 4K was a huge jump, from the 2 million pixels of 1080p to around 8 million pixels.

We could have been happy with 1080p for a lot longer if we'd had solid 60fps gaming with a bit more antialiasing, and streaming services with higher bitrates.

High-DPI screens are great if you're reading text off them at close range. But not really needed at a normal TV-watching or gaming distance.


4k maybe, but 60fps should be the minimum going forward IMO. Or at the very least a 60fps option in all games.

I think 30fps is a straight up usability issue. The only times I've ever had motion sickness or a headache from games is when the game is 30fps (though not all 30fps games gave me these issues)

I hope the performance/quality modes become standard this generation.


I'm not trying to crap on upsampling, it's going to be critical to get decent fidelity in modern games.

I think you're wrong about frame rate, though. I find 30fps unplayable. Also, people have had a taste of it: they have been playing 60fps titles since launch on the next-gen consoles, which makes the frame rate all the more jarring when you play something running at half that.

Not sure what you are talking about with games using Unreal Engine. It's also used for shooters; Gears of War is one of the flagship Unreal titles. I'm sure the folks at Unreal don't want people to think "the engine isn't for shooters".


I'm sure the folks at Unreal don't want people to think "the engine isn't for shooters".

Especially considering the engine originated from shooters -- Unreal and Unreal Tournament.


At least some iterations of Gears of War do run at 30fps.


I completely agree and don't get why you're being downvoted. With fast-paced games it's even hard to notice the difference between 1440 and 4k. I'll take 60/90/144 fps over "true 4k" any day.


If I am running a game at under 60fps, I will probably assume something is wrong. I feel pretty bad if I sit and stare at a low-fps game for too long. 4K, however, I do not give a rat's ass about.


This demo was made on the unfinished engine with a team of fewer than 100 people in less than a year. Once this is in the hands of bigger studios with longer schedules and the engine is even more optimized, I think you'd be surprised.


If AI upscaling is indistinguishable to the average gamer, I really don't see the problem. I'd much rather have 4K 120fps using AI upscaling than full HD 30fps without it.


I don't think this is using a DLSS-style AI upsampling technique, but rather a more traditional algorithmic approach like AMD's FSR uses. I'm not saying upsampling isn't good; it's going to be absolutely critical to get this kind of thing running. I was just saying there is some nuance to the 4K/60fps claim of many titles.


This write-up is unsubtle about being a puff piece. The demoed tech is very impressive! Individual stills taken from the video are pretty convincing.

The motion version definitely still exhibits the uncanny valley, though: lights that are too hard on flesh, tightness and sharpness where there should be softness. That I have to resort to hand-wavy descriptors is cool; they're getting close! But crossing the uncanny valley is HARD.

What this demo convinces me of is not that we’re there; we are not. Claiming that we are is disingenuous, and makes me distrust the claimant. But it makes apparent that we will get there. And that’s impressive.


Speaking of motion, the motion of the characters is my biggest gripe. It feels like we are making tons of progress with lighting, textures, and faces, but characters still move with all the fluidity of a stick figure. Rotoscoping still beats the pants off CG here IMO.

I wonder if CG models the vertebrae yet? Tough to motion capture, but a huge part of why humans don't move like stick figures.


I think it's more a case that in this particular tech demo the animation was a letdown. Games have progressed greatly in animation, with demos such as [1] and [2] being good examples. I'll admit, of course, that animation is one of the larger difficulties in crossing the uncanny valley.

1. https://youtu.be/o-QLSjSSyVk?t=1297

2. https://www.youtube.com/watch?v=i78ds3bJFDE&t=2739s


There are some modern games with very impressive animation, for example The Last of Us Part II. That takes a lot of work, and the quality of animation varies a lot between games, even AAA ones.


I’d argue the opposite - it’s been clear since the early aughts we’d get there. What’s impressive about this demo is specifically the leap it makes towards that. I wouldn’t have expected real-time graphics on consoles to be this close for another generation.


Careful, real-time demos have been made before and have turned out to be anything but.

I'd wait until it's in my hands and I'm touching the TV trying to open the window for Keanu before I get excited :D


Lots of people have played with those graphics in real time. There are some tricks behind it, like putting a whole game's worth of asset weight into a tiny demo, but the demos they run are still fully playable.


In the demo video (playthrough) they say it was rendered in real time right at the beginning. Also, which console it's being rendered on shows up in the corner from time to time, so I'm assuming it's not faked.

It looks quite good. Not reality, but the cityscape is pretty close.

The fun part is the end, where they show some of the secret sauce of how it was rendered.

https://youtu.be/WU0gvPcc3jQ


Keep in mind that it's running in real time on a gaming console that has been on the market for a year already. That makes it very impressive, IMO.


This is running on current generation consoles, imagine what the next generation will look like.


Thanks to HN, I checked it out on my PS5 tonight. Very impressive. The thing that immediately jumped out at me was the 24fps target. The demo has a cinematic feeling, and thus the 24fps framerate feels “correct.” Compared to 60fps, you can do a lot more when you have 2.5x as much time per frame.

The first part is a CGI representation of Keanu and Carrie-Anne. I was shocked at the quality; it had “the demoscene feel,” if you know what I mean - “I did not know the hardware could do that.” Artists going full OCD, just because it was the right thing to do. Great work.

Parts of it had that CGI feeling, but most of the first two minutes, I swear I was watching video and not CG.

Part 2: the city tour. I am not sure if there’s a detailed YouTube video of same, but the artists used some clever tricks, as any good 3D artist would. You can fly around and peer into offices and apartments through their windows. There are a limited number of static rooms, and they use frosted glass to blur the details. Interiors are fully modeled and there is parallax through the glass. HVAC units on rooftops are robust, down to the warning messages. Great use of LOD.

I saw Z-fighting and LOD “popping” when surveying the city from on high.

I walked away from this (um, got up from the sofa) thinking Epic’s gambit is to push Unreal Engine 5 as a virtual movie set.

They went all out on the character models in the first part, with Keanu and Carrie-Anne. There were a few uncanny valley shots (“dead eyes”) but very impressive overall. Some of the shots could be taken as a reference, or gold standard.

Even more impressive is that it’s running on a (loss leader) $600 Sony PS5.


> The thing that immediately jumped out at me was the 24fps target

The demo runs capped at 24fps? That has to feel almost unplayable in an interactive setting? A dip below 30 in an interactive first person perspective is usually pretty disturbing.


The intro scenes are capped at 24fps. The interactive segments run at 30fps with dips. Digital Foundry discussed it here - https://youtu.be/ib6_c6uliLg?t=1985


24fps is during cinematics, the gameplay segments are 30fps.

PS5 is 499 btw and there's even a 399 model (digital edition).


> I walked away from this (um, got up from the sofa) thinking Epic’s gambit is to push Unreal Engine 5 as a virtual movie set

I think they have been doing this for quite some time already:

https://www.youtube.com/watch?v=gUnxzVOs3rk



Here is a link to the demo on YouTube, https://www.youtube.com/watch?v=WU0gvPcc3jQ, for those who haven't got a PS5/Xbox.


I find myself extremely bothered by the artifacts from temporal super resolution. It's not blurry like temporal anti-aliasing was; instead, it's like anything fast-moving has a detailed shimmer around it. I noticed similar problems when DLSS was cranked way up in demos (and based on the blockiness of the unfiltered portion of the demo, this is pretty heavily cranked up as well). Some of the particle lighting effects aren't handled too well by it and end up looking like streaking pixels.

Overall very good graphical quality demo, would love to see this on a maxed out PC without so much scaling/a higher framerate though.


That's a limitation. Temporal supersampling, both DLSS v2+ and others, works by sparsely sampling the scene and combining jittered offsets over multiple frames, but with fast-moving content and big foreground/parallax elements there are large sections of the screen that have only been disoccluded for a single frame, so you only have lower-resolution data to work with there, and it is heavily undersampled, so it needs blur to cover up what would otherwise show extreme aliasing.
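For anyone curious what "jittered offsets" means concretely, here is a generic sketch of the standard Halton-sequence sub-pixel jitter used by many temporal upscalers; it is illustrative, not Epic's actual TSR code:

    #include <cstdio>

    // Radical inverse: the basis of the low-discrepancy Halton sequence.
    float radicalInverse(int index, int base) {
        float result = 0.0f, frac = 1.0f / base;
        while (index > 0) {
            result += (index % base) * frac;
            index  /= base;
            frac   /= base;
        }
        return result;
    }

    int main() {
        // Each frame the camera is nudged by a different sub-pixel offset, so the
        // accumulated history covers the full-resolution grid over ~8 frames.
        for (int frame = 0; frame < 8; ++frame) {
            float jx = radicalInverse(frame + 1, 2) - 0.5f;  // Halton base 2
            float jy = radicalInverse(frame + 1, 3) - 0.5f;  // Halton base 3
            printf("frame %d: jitter (%+.3f, %+.3f) px\n", frame, jx, jy);
        }
        // A freshly disoccluded region has only one frame of history, i.e. one
        // sparse sample per pixel, which is exactly where the shimmer shows up.
    }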


I found it particularly distracting when looking at the player's hair. The background pixels around her hair are a complete disaster. There's a definite halo where the pixels being drawn are nonsensical.


Played it on my friend's Xbox Series X Saturday night, seriously impressive stuff - even if the framerate is a bit stuttery at times and the driving controls are awful.

Disappointing to see there's no PC version (yet) although I hear it's in the works. Would love to put my RTX 3080 to the test.


Looks like a video game... with good graphics.


I was initially freaking out when it was really Keanu, but when it switched to the CG character, I just knew it wasn't him. Then during the action scene, the explosions were still like the quality I saw in GTA V about 8 years ago. The cars looked fine, but the way they rolled after exploding did not convince me personally. Perhaps that is why this promo is a great one for the Matrix franchise, as the world of the Matrix looks very real but isn't. The original Matrix movie has aged like fine wine and is still a perfect metaphor for today's world, especially with how video games have become increasingly realistic thanks to technologies like RTX and VR/AR apps and games, yet they still don't fool anyone into thinking they are real. Here's a great article I read once which encapsulates what I am saying much better than I ever could. https://arstechnica.com/gaming/2012/02/how-close-are-we-to-t...


Most of the time… sometimes with not so good graphics (e.g. the explosions and the characters in the car and…)


Meh, these guys are still stuck in the uncanny valley.

Objects are often absolutely beautiful and realistic, but interactions between objects are ridiculous. Some of this is probably just "not enough processing power", but it is also due to where the rendering bumps up against the primitive controls. For example, you have a very realistic human. But then the human walks against an object, and she does this standard video game 0 mph duck-walk. Part of the problem here is that real humans don't bash into walls. So how do you realistically represent that, when in fact it's just Johnny pressing forward on his controller?


I feel like you could probably realistically represent "people don't just walk into walls" by having invisible walls around everything, at a sensible offset, and do some other tweaking - if I guide a character down a corridor at a weird angle, they should just sort of casually turn to mostly align with the direction of the corridor. There are probably many, many special cases that need to be exceptions to this and figuring out the Right Thing to do in each of them may be an insanely complex task.

I seem to recall the game "Bound", where you guided a low-poly ballerina through fairly abstract levels, had a really nice solution to "people don't bash into walls": if you tried to guide your character into a wall, she'd just start doing some stretches against it, which feels pretty accurate given how I tend to behave when I'm attending dance classes regularly. :)

Other characters might do different things; a mopey teen might slouch against the wall, a gun-haver might press against it and peer to the side, looking for cover, etc.

Obvious reasons not to do this start with "oh great, one more thing to add to the character state machine and debug" and "oh great, a few more animations to fit into the budget". Maybe also "it stops feeling like you're controlling the character". Someone whose day job is video games probably has at least a dozen more reasons, possibly even stories from when they tried it and abandoned it.
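The usual building block for that "casually turn to mostly align with the corridor" behaviour is collide-and-slide: cancel the part of the requested move that points into the wall and keep the part parallel to it. A minimal sketch, with the vector type and names invented for illustration:

    #include <cstdio>

    struct Vec3 { float x, y, z; };
    Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Remove only the component of the desired move that points into the wall,
    // so the character slides along it instead of duck-walking in place.
    Vec3 slideAlongWall(Vec3 desiredMove, Vec3 wallNormal) {
        float into = dot(desiredMove, wallNormal);
        if (into < 0.0f)
            desiredMove = desiredMove - wallNormal * into;
        return desiredMove;
    }

    int main() {
        Vec3 move{1.0f, 0.0f, 0.2f};      // mostly forward, slightly into the wall
        Vec3 normal{0.0f, 0.0f, -1.0f};   // wall facing the player
        Vec3 out = slideAlongWall(move, normal);
        printf("slid move: (%.2f, %.2f, %.2f)\n", out.x, out.y, out.z);
    }

Blending a contextual animation on top of that slide, like the stretches in Bound, is where the per-character special-casing comes in.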


https://youtu.be/LNidsMesxSE?t=674

It's definitely possible to "not bash into walls"

You don't always need to have bespoke animations- sometimes keyframes and realistic constraints can get you there. I think Unity is working on a framework for natural human animation.


> https://youtu.be/LNidsMesxSE?t=674

David Rosen pulled off quite a few little masterpieces -- his gamedev competition entries always had fantastically unique aspects. I cannot believe "Black Shades" was almost 20 years ago! (I have no affiliation, but I am glad to see https://www.wolfire.com/games is still alive and kicking!)

Lugaru and later Overgrowth have been the outcomes of what can only be called a decades long obsession with getting the player controls just right, allowing very natural and responsive control and movement, through creating emergent behaviors from first principles, resulting in a rich and natural interaction with the environment (and other characters, see the fighting mojo), in contrast to the many, many AAA titles that just constantly break the immersion through unnatural, jarring behaviors and frustrating limitations.


Overgrowth was such an enticing game to play. It didn't have AAA graphics, but its physics were so much more realistic than the games I'm used to that I was sucked in, and it left a strong impression on me. I wish more games were doing procedural animation, and physics-based damage.


This is a fundamental gameplay issue to some extent though: you always wind up making a trade off between responding to the player and maintaining animation fidelity, and since controllers aren't a human body with a nervous system you're connected to, it's always going to be a problem.

It's significant too: nothing frustrates players quicker than controls feeling "unresponsive" by doing something different from what they instructed.


> nothing frustrates players quicker than controls feeling "unresponsive" by doing something different from what they instructed.

Which is what some games do fundamentally badly. When using skills in some games, your action gets locked completely and won't respond to your input until the animation finishes. (And when a mob hits you in the middle of it, you can't do anything because the animation is playing.)

But some games put more effort into this and allow you to transition to another action in the middle of one, blend them seamlessly, or even play them at the same time. That makes the game feel much more responsive and enjoyable. It feels like you are actually moving your body instead of an online character with high action latency.


> the human walks against an object, and she does this standard video game 0 mph duck-walk

Is that really part of what uncanny valley represents now? I feel like we're moving the goal posts. Fifteen years ago the characters really could be uncanny and disturbing. I don't get that anymore. I feel like it's very close to real, but not real. However, it's in no way off-putting or weirdly creepy.


I get the uncanny-valley feel just from the face stills.

Watching the demo, I find it very impressive. Still in the valley, the key word being "still" - I think they managed to get past the bottom and are now climbing up.


Me too. And my wife just walked past and saw one of them and asked “What is that?!?” We’re still deep in the uncanny valley for live video game content.


I agree with you. I think they have made significant progress on the valley, and I personally feel it's not something that moves. The uncanny valley was a particular place the technology came to, where people specifically were reminiscent of a talking mannequin with the dead eyes of a shark. I feel like a lot of that is gone, but when watching the fully rendered car chase, the overall impression was simply of a really good game. So, progress, but still not reality.


The uncanny valley naturally changes goalposts. It has for the past 60 years.


Our perceptions are like an adversarial network -- we are always getting better at spotting the simulation, so simulations have to keep improving to try to fool our perceptions.


Yeah having actually played it you run into this immediately. NPC cars will deform in a crash, but a street lamp will bring your car to an immediate stop.

That’s far from an unsolvable problem though, and this is a graphical tech demo, not a full game with the associated staff, budget, time, etc.

The Demon's Souls remake (which I think actually looks nicer) has tons of destructible boxes, tables, stonework, etc. (the original had much of this as well). I'm more interested in high graphical fidelity being used in these smaller, authored spaces than in massive, often empty open worlds.


Open worlds disappoint me for that reason; they quickly come to feel empty.

Filling up a small map with “real” content by hand is hard, doing it on a huge scale is impossible.


> Meh, these guys are still stuck in the uncanny valley.

Still impressive technically and visually. Let's not forget that these things are running in real time; they aren't a video. The whole "uncanny valley" thing is exaggerated; in the next 5 years you won't be able to tell the difference. In fact, your brain will be constantly asking itself whether what you see is real or a 3D model. And I bet it did if you played the demo. That's a serious leap forward.

Most video games will still feature heavily stylized/cartoonish models, though, because some people are afraid of models being too realistic.


>In fact, your brain will be constantly asking itself whether what you see is real or a 3D model.

I get this in movies. Sometimes I even get this in scenes where they're using real actors and not CGI. Harsh lighting with make up can make humans look somewhat fake to me. One of the Mission Impossible movies gave me this effect.


Even just watching this video was surprisingly weird.

While watching it, I realized what you just wrote.

This is really impressive.


I'm a game dev and I've spent a lot of time trying to merge the physics system with player character (PC) control. You basically have two options: let the player controls try to physically manipulate the PC by applying forces, or skip physics entirely for the PC and implement something separate, ad hoc.

I spent a lot of time on both and read a lot about the issue, and most devs end up skipping physics on the PC, because it just feels worse controlling it realistically. This might be because we're used to that, though. After all, you're making a game, and games have to be fun to play.
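To make the trade-off concrete, here is a toy 1-D sketch of the two approaches; the numbers are invented, and real controllers also sweep against geometry, handle slopes, steps, and so on:

    #include <algorithm>
    #include <cstdio>

    // Kinematic / ad hoc: velocity is driven directly toward what the stick asks for.
    float stepKinematic(float v, float target, float accel, float dt) {
        float dv = std::clamp(target - v, -accel * dt, accel * dt);
        return v + dv;                 // snappy and easy to tune for feel
    }

    // Physics-driven: the stick only applies a force; mass and drag decide the rest.
    float stepPhysical(float v, float input, float force, float drag, float mass, float dt) {
        float a = (input * force - drag * v) / mass;
        return v + a * dt;             // "correct", but the feel hides behind many knobs
    }

    int main() {
        float vk = 0.0f, vp = 0.0f;
        for (int i = 0; i < 30; ++i) {                 // half a second at 60 Hz
            vk = stepKinematic(vk, 6.0f, 40.0f, 1.0f / 60.0f);
            vp = stepPhysical(vp, 1.0f, 600.0f, 80.0f, 80.0f, 1.0f / 60.0f);
        }
        printf("after 0.5 s: kinematic %.2f m/s, physics-driven %.2f m/s\n", vk, vp);
    }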


I used to be a game dev, and the goal always was to create an illusion; physics and whatnot was actually used primarily in order to save the effort, and I, too, thought that quality suffered and things would come out less fun that way.


I don't get the complaints. Sure, there's plenty of work to go before the avatars look completely natural, but can't we take a moment to appreciate that you can drive a car down the highway at 100+ mph with zero pop-in, real-time ray-traced reflections, and global illumination in a scene lit only by the sun, sky, and emissive textures? With hundreds of cars and pedestrians moving about? On commodity hardware, no less?

This is what blew my mind the most about this demo, and I think it will enable some crazy cool evolution in open-world gaming.


About pop in, at 3:10 in Real Civil Engineer's video you can see a car pop in directly in the line of fire. It's a shame to point out tiny flaws like this but they are nonzero. https://youtube.com/watch?v=aDD4nSlh2BM&t=185


This is probably individual and mileage varies by platform, planet alignment and what not, but I never noticed anything like this when playing the demo. Doesn't mean there wasn't any pop-in of course – well, certainly none as egregious as the linked video at least, pretty sure I would've noticed that – but I don't recall any whatsoever, either in the action sequence or subsequent city exploration.


It might have been because your attention was fully occupied.

Like in that video where you had to count how many times people passed the ball but failed to notice a gorilla walking through the scene.


I've run the demo on my Xbox Series X.

Let me first state that it's amazing overall, very impressive.

Having said that, it exhibits pretty noticeable pop-in, as well as quite a few instances of what looks like garden-variety z-fighting of textures/shaders in some places.

In a way these actually jump out at you a bit more in this demo than they do in less photorealistically rendered environments, probably for reasons similar to the uncanny valley phenomenon being discussed throughout this thread.


I saw none of this when running on PS5 and I navigated the city for quite a while too. Maybe I just got lucky, or maybe mileage varies across platforms, I dunno. Very impressive anyhow.


I think it's mostly about where the team puts its attention and how to make the best use of finite production time and budget. Such demos (or games in general) are like fractals: an infinite number of people could spend an infinite amount of time getting every little detail right and still not cover everything.

E.g. the goal was probably to show off the new Nanite and Lumen rendering features, and probably MetaHuman for the main character faces and facial animations. At some point you have to say "OK, we focus on this stuff and accept compromises for other things." If the team's goal had been to create the best crowd simulation or environment interaction ever, they would probably have achieved that, but at the cost of other things.


There aren't a whole lot of good physics libraries that run on GPUs just yet, and much of the physics you would want can absolutely eat up 16-64 CPU cores to get a somewhat decent frame rate with high realism.

It's similar to ray tracing, where you get great realism but it's an absolute resource hog. The number of vertices in soft-body physics animations comes to mind. The real world has curves and deformations, and hair, with countless strands that each need tens or hundreds of physics-enabled vertices to look ultra-realistic. Walking through a city, you might have 100 NPCs on screen at once, all of whom have hair.

You simply need so much freaking compute power to pull this stuff off, I can't wait to see what it looks like in 10-20 years.


> Walking through a city, you might have 100 NPCs on the screen at once, who all have hair.

This is why I'm most excited for high-core counts leading to game development that allows these kinds of things to run in their own hardware threads. Imagine a world where each of those 100 NPCs has hair simulated on one thread, clothing simulated in another, world interaction on another, even AI generation on its own thread.

It's crazy just looking 10-20 years back and seeing where we are today. Compare this tech demo to Skyrim (2011) or GTA III (2001) and it's just incredible that we're looking at photorealistic, ray-traced, real-time "gameplay" like this on a home video game console. Or just compare GPU specs, the GeForce3 (2001), GTX 580 (2011), and RTX 3090 (2021) all have texture fill rates of 1.92, 49.41, and 556.0 GTexel/s respectively, with similar increases in raw floating-point performance. The future is exciting.
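A minimal sketch of that per-subsystem split using plain std::thread; a real engine would use a job system and finer-grained tasks, and the Npc fields here are stand-ins:

    #include <cstdio>
    #include <thread>
    #include <vector>

    struct Npc { float hairSim = 0, clothSim = 0, aiState = 0; };  // stand-in state

    int main() {
        std::vector<Npc> npcs(100);
        const float dt = 1.0f / 60.0f;

        // One worker per subsystem; each touches only its own field, so no locks needed.
        std::thread hair ([&] { for (auto& n : npcs) n.hairSim  += dt; });
        std::thread cloth([&] { for (auto& n : npcs) n.clothSim += dt; });
        std::thread ai   ([&] { for (auto& n : npcs) n.aiState  += dt; });

        hair.join(); cloth.join(); ai.join();
        printf("updated %zu NPCs across 3 subsystem threads\n", npcs.size());
    }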


The speedup is slowing down though.


But imagine if all these calculations for multiple objects were made on a server, in a shared persistent world that could then be projected onto particular views for each final user connected to the system.

Oh, wait...


The city feel is perfect though. It is just the interactions/movements of some of the characters.

Also, the acceleration of people and cars still feels too artificial. It is not the looks but the physics that are still fully in the uncanny valley.

I think that as graphics get more and more realistic looks-wise, game designers need to slow down the game and make it feel more real.

One of my favorite car chases is from the movie Ronin. It feels extremely real, like you are there, as they didn't use any outlandish effects, but kept it grounded on how a real car chase would feel.

https://youtu.be/2m-ofGDLNlM?t=61

https://www.youtube.com/watch?v=F0_UqwsYoGY


Spot on about the physics. For me, the thing that really gives it away in the white room is Keanu's hair. The way it moves is just off.

Super impressive demo though.


Also body language. Especially Keanu's right hand/arm.


To be fair, the Ronin car chases (and a lot of similar ones) exploit camera angles and FOVs to make a slow-moving car feel fast, so they are not as realistic as you think either.


> But then the human walks against an object, and she does this standard video game 0 mph duck-walk

Who cares? It's a video game, that's how they work. I'd rather have developers work on making things fun than trying to make a creepy realistic Keanu not walk into walls


A lot of newer, fancier cg in movies and games is deep in the uncanny valley for me. I can’t watch/play for more than 15 minutes without realizing I’m in a game that doesn’t look right, it’s distracting.

I suppose the tech can’t get across the valley without first passing through it


No, they are just getting cheaper with their CGI, not better. I would bet that if someone invested the money and time they did in the '90s and '00s, we would be absolutely astonished.


what? Avatar is the most money ever spent making a film in history..


> A lot of newer, fancier cg in movies and games is deep in the uncanny valley for me.

There's a lot of uncanny CGI in modern movies. There's also a lot of CGI that is so perfectly realistic you have no idea it isn't real. This is usually more environmental and less characters.

If the tech gets past the uncanny valley then you'll never know. This is basically true by definition. :)


> If the tech gets past the uncanny valley then you'll never know. This is basically true by definition.

No they’ll tell me

(And I’ll probably not agree, but if I do they’re still going to tell me, trust me they haven’t stopped telling me)


Watching the demo I thought the biggest part of the uncanny valley aspect of the characters was the lack of spine flexion. Every character walks like they're having lumbar spine spasms.

Aside from the motion though they look incredible.


I thought you were going to make a different point about interactions. On pretty much any CGI I've seen, contact with skin doesn't look real. It could be skin on skin, or skin against a surface like a wall. It always looks like two objects not really touching each other. A lot of CGI (like Pixar etc.) seem to prefer hairy animals or solid robots, presumably for this reason. It seems easy to get hair to look OK, maybe because there's enough noise in the interaction to distract us. But skin just never looks real.


To be fair, older Keanu Reeves made my brain hurt, because I was lost and not sure if there was a shift between model and live footage at some point. Only the gait gave it a too-unnatural aspect.


> Part of the problem here is that real humans don't bash into walls. So how do you realistically represent that, when in fact it's just Johnny pressing forward on his controller?

You could animate that as hugging the wall, leaning against it or something, instead of a 0mph walk.


Isn't that what Super Mario 64 does? At acute angles you hug the wall and shuffle along it, and at right angles, you push against the wall.


Hah, nice catch! Yes, he does. If I remember correctly, he also, like, falls over if you run into a wall, or was it if you jump into it?

It is hard to stress how good and impressive that game was at the time.


As you say, the uncanny valley really shows through when the physics doesn't look right. For me the worst offenders were the agents, e.g. the ones that leap forward through the air while their suits don't so much as flutter. The slow-motion effect seems to be used here to cover for the lack of animation. In terms of effects, the fire and explosions had obvious clipping as objects pass through them, where a realistic, but more taxing, effect would use particles.

At this stage I feel like the engine is very good, and sure, processing power is still a factor, but I feel like the biggest factor is cost/time. Small details that make something look natural are sure to go when there is budget or deadline pressure.


> Part of the problem here is that real humans don't bash into walls. So how do you realistically represent that, when in fact it's just Johnny pressing forward on his controller?

That doesn't seem like much of a theoretical obstacle to me. The primitive controls only signal intent - you don't have to animate the human shuffling against the wall, you can just not move. The controller should be like whispering into the avatar's ear, not QWOP.

Or if you do want to grant the player lower-level physical control, you can have the human actually collide realistically with the wall. After all, people do bash into walls, just not commonly. You can even make them bump their head, and take damage.


The problem in animations is quick responsiveness vs slow realism. If someone made a game with super realistic animations, it would probably be a niche exploration type game like what Team Ico used to do.


Movement is usually more in service to game design goals than realism but it's a solved problem with many different approaches from Mirror's Edge to Death Stranding


A lot of the older Keanu isn't CG, it's filmed. They switch to the CG versions later. This confused me a lot as well, as I was totally blown away with the first Keanu I saw.

"Even now people are confused, having watched the footage several times. “The first time you see the older Keanu that was a video shoot that we (Epic) filmed of him,” Pete explains. “After they come back after Morpheus that was all the CG versions. It was color corrected to match and look the same.” There is also one point where we see footage in the mirror this was real film footage that was comped in UE5. The first shot of Morpheus is also not CG, but then becomes CG when the young Thomas Anderson (Neo) appears. Even the desk scene at the beginning when Anderson is asleep was a real-time recreation of a scene from the original film."


I think the most impressive feat of the demo is making people think some of the filmed parts were "obvious CG".


The model for Keanu looks spot on.

Carrie-Anne Moss, who on camera looks stunning for her age, here looks like she aged at least twenty years and her head was stretched, almost like they got the aspect ratio settings wrong. She's almost unrecognizable.

When they're in the car and the camera pans back to "IO" in the back seat, I felt like I was watching Who Framed Neo Rabbit.


Neural networks may be the solution to character-world interactions:

https://www.youtube.com/watch?v=wlndIQHtiFw


Interactions can be really hard to get right, see doors [1]

1. https://www.youtube.com/watch?v=AYEWsLdLmcc


I quite like door interactions in VR - specifically Half-Life: Alyx


Yeah, they look too much like mannequins in the YouTube trailer video.


Why is the original Jurassic Park (1993) still some of the best CGI to date?


Jurassic Park used a lot of animatronics. Some shots you might think are cgi (and probably would be in most modern movies) might not be.

Edit: here is a short video about the full size animatronic T. rex. https://youtu.be/fFTsYGgdR9k


It’s used thoughtfully with an awareness of the limitations of special effects. The actual movie is good and exciting so there is no attempt to use CG as a bandaid on otherwise dull action sequences, “fix it in post”.


Maybe because no one has ever seen living dinosaurs? Just a guess.


This is a unix system. I know this!



Agreed on the uncanny valley. That said, it seems like some research coming out about AI-generated movement looks promising.

https://www.youtube.com/watch?v=t33jvL7ftd4


The Boston Dynamics robot people seem to have that figured out.


I'm surprised at the number of comments with a jaded tone. This is one of the most impressive tech demos I've seen running in real time, and I watched Second Reality by Future Crew back in the early '90s when it came out.

The car chase sequence is mind-blowing, and then you get to free-roam the city on foot, by car, or even by drone. Nanite and Lumen are something special.


Tech demos are just that: demos. Very few, if any, studios will release a game on that engine with such fidelity. And given the current practice of AAA studios now starting to charge $70 for base games and still releasing them as a broken mess, I doubt we'll see any game be as remotely exciting as the demo.


Yes, the set piece is very impressive, but that's the upside of running on rails: the devs have complete control over almost everything. When you're dumped in the world on your own, the environment is very high quality, but as soon as you start doing the kinds of things you would in the normal open-world game this is presented as, the framerate quickly tanks to the low double digits. Try driving into a line of cars, for instance.

The number of cars around the city is very impressive, until you realize a lot of them are non-interactive static props. It's a tech demo, and I doubt the game itself will reflect much of what you see here.


This isn't going to be a game. They wanted to show UE5 working on current gen consoles and showcase the tooling, for example the city was built procedurally with Houdini.

Even the free roaming is impressive; the whole scene is lit by the sun. They recognize there are still things to work out, like the car collisions you mentioned: all the cars are rendered with Nanite until you collide with them, at which point they are swapped to "traditional" models.

I mean, for a tech demo it even has far more consistent pedestrian and vehicle simulation, even at long distances, than AAA games like Cyberpunk 2077.


I think the demo could have been more effective if the non-real-time elements were indicated more clearly; as is, people got confused about what was real time and what was just pre-recorded. I suppose that was kind of the point, but without any sort of reveal the effect gets lost. I don't know if the console demo has the option to switch to "Nanite view" at any point or just in the "gameplay" section; if it does, then I guess that would help with the situation.


OK, I was under the impression that the white room scenes were a mix of filmed and digital assets, not 100% renders. In this context it is very impressive. The later scenes are very good but, as others put it, still in that uncanny valley area. We are on the up-slope, however!


That 3D Keanu in the white room had me fooled; the hair, the skin, the clothes are bang on. I read they did "4D scanning", so am I to assume the hair and clothes motion capture are part of the scan?


The white room scenes are in fact a mix of video and real time rendering. It flips back and forth 4 times.


> The technical demo also puts previously showcased UE5 features Nanite and Lumen on display. After the interactive chase sequence, users can roam the dense, open-world city environment using UE5’s virtualized micro polygon geometry system which provides a massive seven thousand buildings made of thousands of modular pieces, 45,073 parked cars (of which 38,146 are drivable), over 260 km of roads, 512 km of the sidewalk, 1,248 intersections, 27,848 lamp posts, and 12,422 manholes in a virtual world that is slightly larger than the real downtown LA. Additionally, as one drives or walks around the city, you can pass 35,000 metahumans going about their everyday business.

That is some amazing tech! It actually feels like Unreal Engine would be really well suited for other genres of games, not just the historically prevalent FPSes, such as creating RTS games or maybe even something like Cities: Skylines, in which being able to create large detailed urban environments could be amazing!


Real time de-ageing + Zoom + remote job = less age discrimination. Have it randomize other characteristics during job interviews to remove other forms of discrimination too.


Or just make it audio only. Hell, some companies are using writing samples to decide to hire, such as Gumroad I believe. I think that's good because you get to take your time and there's significantly less bias than with video and audio.


You can opt to use no web cam or a virtual webcam with a victor avatar already. There's no need to "deage" yourself when you can be a cute anime cat girl.


We need very strong privacy restrictions. No sharing of personal pictures without express consent. Basically, every human being should be able to choose which persona to expose in different situations, and the links between them should be evident only to them (and their lawful delegates).


A Scanner Darkly future is coming!


The faces, especially Trinity's, look underwhelming compared to real life. Like a big gap. I've seen better facial expressions, e.g. in The Last of Us Part II, but with inferior graphics.


The animation on their faces does look a little off -- I wonder if the higher-quality models mean they need to do even better on the animation to stay out of the uncanny valley.


Re: TLoU2, maybe it's easier to create a completely original character and make them convincing, than to match a very well-known actor?


It’s notable that they mention the problems de-aging Keanu, which really contrasts with the “Keanu is immortal” meme from a few years back. I guess age finally caught up with him.


Gray hair is finally showing up with the beard, which definitely ages him.


Keanu looked insanely good at the beginning of the white room scene, but then I realized it was real footage when they switched to the uncanny-valley-esque version at 1:41…

https://www.youtube.com/watch?v=WU0gvPcc3jQ


Right. The team behind this is being a bit dicey with the truth. They never seem to be clear which parts are CG and which are video. Any time Keanu looks goofy, that's CG. It looks hand-animated to me because the movement is so terribly poor. Mo-cap to me is still shitty (see e.g. Thanos in Avengers), but hand-animated is always uncanny.


The article seems to suggest that's all real-time rendering though?


Even the scene from the original Matrix shown on the TV?


This is the actual movie 'wake up neo' scene: https://www.youtube.com/watch?v=sjoad6gcRzs


I’m talking about what’s shown on the CRT TV in the white room.


I sometimes think that if an article like this were done with actual footage of people while claiming it was CG, there would still be a lot of "I could tell immediately it was CG", "It is so terrible", and "The faces look so bad", etc.


Since I didn’t see it elsewhere, here’s a link to the actual video being discussed: https://youtube.com/watch?v=WU0gvPcc3jQ


A better link might be the press release from Unreal themselves [1].

The Verge also did an interview with Keanu Reeves and Carrie-Anne Moss on the process of creating the demo which is quite interesting to watch, too [2].

[1] https://www.unrealengine.com/en-US/blog/introducing-the-matr...

[2] https://www.youtube.com/watch?v=0OK80eljWrs


It's just a (playable) tech demo, alright. We'll have to wait 7 to 10 years until we get games close to this anyway. Remember the "Samaritan" UE3 tech demo? That thing is 11 years old by now [1] and the closest we have to that is probably CP2077 ... and only if you ignore the console ports, yuck.

[1] I am aware of the fact that "back in the day" they used three 580s in SLI or something crazy to run it, but that performance was already surpassed two years later by single card setups.


I think what I would like to note is how appropriate it is for Unreal to demonstrate this with The Matrix: an engine and a film from the same era (if I remember correctly, the first full release of the engine came the year before the film).

The technology itself is incredibly impressive, but more than anything, and something that is hard to convey with a demo, is how easy it is to actually use. Nanite models are simple to make from 3D objects; this combined with photogrammetry would be a dream.


I believe the current CTO of Epic Games was in charge of bullet time for the first Matrix.


Tech demo aside, the city model is remarkably detailed. When we went exploring different alleys and flew over buildings, we couldn't find a "forgotten" corner. Does anyone know where the city asset comes from, or what it will be used for? The thing is massive and seems to be bespoke art.


It is largely built from Epic’s huge library of photogrammetry assets. This is open for any dev to use.


The most impressive thing is the scale of the city and the amount of traffic. In other games you feel that buildings are way too small and every object is placed by hand for the player's convenience. This tech demo nailed the feeling of a livable metropolis.


I know there are companies using virtual models to showcase clothing. Is replacing actors and actresses next? In some ways I don’t mind it, and decentralizing away from celebrity culture may even be a societal good. But I feel like some experiences are better off organic. Is there any point to appreciating - as an example - ballet performed by a virtual dancer with unrealistic athleticism? Will sports matter? Should they? I am not sure how things will play out but I expect all these norms to be challenged in the next 50 years.


Whatever… It’s just that you'll never even be expected to trust your own eyes anymore, unless you're peering right at the real world.


Define real ...


The world that seems to be the realest one, to the extent that if I try to dig any deeper I would inconvenience myself by getting put back in the mental hospital.

Pragmatism dictates that, even if reality is a simulation, who cares, it's real enough for now.


Virtual celebrities are treated the same as real ones; branding is highly valuable. Check out VTubers.


Doubt it. Miku Hatsune is 30 and still hasn't outgrown her reputation of just being a default asset.

Why would data, which can be replicated, isn't limited to one body, and doesn't care if it lives or dies, be anything like a human on the inside?


Those are just people with virtual avatars. Sorry to burst your bubble.


I spent 30 minutes watching YouTube footage with my wife yesterday. It is absolutely incredible.

I suspect they used this technology to make the movie, or parts of it. Why else would this be made?


Unreal Engine and Unity Engine are both being used in filmmaking these days. So it's very likely.

For example: UE was used a lot for The Mandalorian (https://arstechnica.com/gaming/2020/02/the-mandalorian-was-s...).


Fun fact - LazyTown, the meme-able (You are a pirate/We are number 1) kids TV show from Iceland used Unreal Engine 3 to generate the backdrops.

https://en.wikipedia.org/wiki/LazyTown#History_and_productio...


Can you link to some of the vids you recommend?


I think any random video would do


The rendered actors look so close to reality but at the same time so obviously artificial... something is just off the mark. How would a CGI expert put it into words?


Not a CGI expert (although I can make a good looking cube in Blender).

"Uncanny valley" is the usual term - https://spectrum.ieee.org/what-is-the-uncanny-valley


A lot of it is subsurface light scattering. Skin is semi-translucent, and that is really expensive to simulate.


I can see what you mean. But at the same time, if I wasn't aware this was fake, I could watch a whole video like this without being tipped off that it's entirely rendered.


Matrix and Vector are also characters in Unreal Tournament 1


Fifty years separate that from this: https://www.youtube.com/watch?v=fiShX2pTz9A Fifty years! Just half a century! It's nothing, nada, niente, zero compared to the scale of the universe. The idea of a simulation isn't absurd if we take this progress into consideration.


This is super impressive and a huge leap toward photorealism. The draw distance and content streaming using Nanite are nuts.

Next I want to see what kind of tech demo they can make run smoothly on a mobile device at 120 Hz. I.e., does the tech scale downward too, or is there no point to it unless you require extreme scale and fidelity?


I have not seen any info on this, but I did see that they are moving Fortnite to UE5. Since that game needs to run on a huge range of hardware, including on mobile, I think we can deduce that a large range of performance scaling is going to be a primary design goal of the engine.


I just watched YongYea on YouTube showing the tech demo on PS5, and I was in awe. Having been a PS user for a long time, I find these graphics amazing. I can't wait until this kind of graphics hits the VR space. So much cool stuff is going to happen in terms of games and how we perceive reality.


Deepfakes might not even be relevant soon.


I wonder if one day there will be companies that just sell hyper-realistic models of complete cities or popular areas to be used in games or other kinds of applications.

Like companies such as Tele Atlas did back in the day for map data (when they didn't have much more than that, I guess).


There already are, and Epic has acquired multiple companies in that segment.

https://www.gamesindustry.biz/articles/2019-11-12-epic-acqui...

Edit: well not of entire cities, more about 3D assets. The city itself was built procedurally using Houdini.


At what level of reality does it become a war crime to turn it off?


Irrelevant in terms of appearances, I think? It’s not a crime to delete a captured video of a person. That discussion is more about the underlying AI, which isn’t involved in this demo.


Maybe when you have actual consciousnesses hooked up to it that would die without it.


“Actual consciousnesses” defined as…?


It's still different. Humans can't be turned back on easily, and they can't be replaced once lost.


Does anyone else kinda feel like videogames look good enough at this point? This demo looks fantastic, but visual improvements beyond it don't seem super exciting to me. I'll take them when they come, of course, and I'm sure that realistic 3D worlds in videogames will continue to look better and better as time goes on.

But I dunno. I hope advances in videogame engines do more than just help the biggest AAA titles improve their photorealism.

What do we get as the technology improves beyond the point shown in this demo? Just better graphics, or a democratization of the production of photorealistic games? Does this somehow open the door for smaller teams to create games that involve photorealism, or for more budget and staffing resources in the development of photorealistic games to be directed toward other aspects of the game?


> Does anyone else kinda feel like videogames look good enough at this point?

Yes, definitely. I kinda wish more resources would be diverted to improved physics, better procedural generation, and especially more realistic AI. When do we get to play a game against AI that is learning from the player and constantly changing its behavior in a truly dynamic way? Or how about a game where you can move and interact with every physical object you see instead of everything being a bolted down prop except for that jarringly different item that you're obviously intended to interact with? That would be orders of magnitude more exciting to me than hyper realistic hair on the player models.


You got your wish - a big part of the Matrix demo is the "Mass" AI system (a new ECS-style framework designed to optimize data-cache hits), and the city is procedurally generated in Houdini, which is a sort of visual node/graph-based programming language.
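
For anyone unfamiliar with why an ECS helps data-cache hits: the core trick is storing each component type in its own contiguous array, so the per-frame update walks memory linearly instead of chasing pointers to scattered per-object classes. A rough plain-C++ sketch of that general idea (this is not Epic's Mass API; CrowdAgents and its fields are made up):

    #include <cstddef>
    #include <vector>

    // Struct-of-arrays: one tightly packed array per component, so updating
    // thousands of crowd agents touches contiguous memory.
    struct CrowdAgents {
        std::vector<float> posX, posY;
        std::vector<float> velX, velY;

        void update(float dt) {
            const std::size_t n = posX.size();
            for (std::size_t i = 0; i < n; ++i) {  // linear, cache-friendly pass
                posX[i] += velX[i] * dt;
                posY[i] += velY[i] * dt;
            }
        }
    };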


That was actually my first reaction when I saw this video. The graphics are incredible, sure. The people don't entirely escape uncanny valley, but they partially do, and everything else looks awesome. No question.

Then we get to the game portion of the demo, and... it's a half-step above a quicktime event. Now, look, yes, I get it, this isn't actually a game, and the game is an afterthought and not the main point. (By that, I mean the game itself, not the cars, not the agents jumping around, but just the underlying mechanics.) Still, consider where we've come from. Think Atari 2600 graphics and the sheer staggering number of orders of magnitude improvement in the graphics since then. Even if they did have tons of people working on this, all those people were working at levels of efficiency undreamed of in the 2600 era, by almost any objective measure you can imagine. (Even though we'd never be able to agree on one.)

Yet the game itself used in this demo is basically something you could fit on the 2600. I mean, the game itself, the mechanics. Obviously not the graphics, or even a facsimile thereof, but just the game's mechanics. It's still "point at the thing you want to blow up". No generalized conversation engine. No generalized music engine. If you'd like to be generous, call it an order of magnitude better than the 2600 could handle.

The gulf between the graphics and the games we have with them keeps getting larger and larger, and it doesn't help that in a lot of ways the AAA games have been boxed into what the engines do. It's not just that "open world sandboxy combat" is the only game they want to make... it's that these huge amazing engines almost require that sort of game, because that's what the engines do. The sameyness of AAA games is forced by this graphics focus, because trying to provide anything but the simplest game mechanics with this level of graphics is impossible.

Which is why I've basically abandoned AAA and go indie now. The lower graphics capability turns out to be a positive advantage, because it means they can make games that aren't The AAA Engine Game.


> Then we get to the game portion of the demo, and... it's a half-step above a quicktime event. [...] The sameyness of AAA games is forced by this graphics focus because trying to do provide anything but the simplest game mechanics with this level of graphics is impossible.

Yeah, that was my impression of the gameplay we saw, and my worry is whether it will encourage that kind of design.


Yes, you got it exactly. This is the democratization of AAA graphics. This whole city was made by a team of just 70. (Red Dead Redemption 2 had a team of 1,600.) They relied heavily on Epic’s giant library of game assets that anyone can use. They used procedural generation to mix those assets into the map and to generate the pedestrians. The lighting and physics systems are also Unreal Engine features. This whole Matrix project is getting released so devs can use it and base their games off it. The graphics are better, yes, but what this is really selling is the ability to make great-looking, AAA-quality games in less time with fewer people.


To be honest, a team of 70 isn't going to be enough if you're making an actual game as big as RDR2 rather than just a tech demo. I won't be convinced until a game studio actually ships a standalone game in UE5 and reports that its workflow has become better.


Workflow has improved a lot in ways that can't really make things worse; e.g., projects are now broken up into lots of small files, so it's a lot more compatible with version control.


> Does this somehow open the door for smaller teams to create games that involve photorealism

Yes: less dealing with LODs thanks to Nanite, less worrying about draw-call counts, less budgeting of tri counts. And Epic provides huge photogrammetry scan libraries that work well with it.
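
For context, the "dealing with LODs" being avoided traditionally looks something like this hand-rolled distance check, with artists authoring every LOD mesh by hand. This is an illustrative plain-C++ sketch only, not any particular engine's API; selectLod and its parameters are hypothetical:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Mesh { /* vertex/index buffers elided */ };

    // Classic workflow: pick one of several hand-authored LOD meshes per object
    // per frame based on camera distance. Nanite's pitch is that the engine
    // streams and rasterizes from a single high-detail mesh instead.
    const Mesh& selectLod(const std::vector<Mesh>& lods,           // lods[0] = full detail
                          const std::vector<float>& thresholds,    // ascending distances
                          float distanceToCamera) {
        std::size_t i = 0;
        while (i < thresholds.size() && distanceToCamera > thresholds[i]) {
            ++i;                                // farther away -> coarser mesh
        }
        return lods[std::min(i, lods.size() - 1)];
    }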


> Does anyone else kinda feel like videogames look good enough at this point?

People have been saying that for years, and then the following year something comes along that blows away expectations yet again. IMO, I won't be satisfied until I literally can't tell an artificial rendering from a real photo or video.


The possibilities for creators are endless; this demo is Unreal Engine showing off what amazing effects it will be capable of in the future.



The car chase sequence, lighting, textures, level of detail, and world building are simply amazing (as played on PS5).


Well, since the PS4, poor animation has been a way bigger problem than rendering...


Is there a PC version to try out? Kind of a shame it is limited to consoles.



