JB0 Posted March 15 5 hours ago, David Hingtgen said: This! You don't realize how many little things are "essential" for daily usage until "something doesn't work like you're used to" and you curse whatever developer made "the stupid way" the default behavior of Win 10/11 and now you have to figure out how to change it yet again... Until it's "how I want it", it's just A computer and not MY computer. I mean, I've had that problem for years. Every time I reinstalled Win98 (I believe we skipped past 95 back then), I opened a File Explorer window and went "where are my file extensions? Argh!" I'm used to there being a million tiny things that need to be toggled before everything is the way it's supposed to be.
David Hingtgen Posted March 15 Toggles/options are fine, the problem is now you need registry hacks and/or entire 3rd-party programs just to restore basic options and functionality that used to be present. If I could simply "click a long list of options" that'd be fine and only take minutes. But instead it's a game of "touch and go" for "how many registry alterations and programs can co-exist before they start causing problems"?
davidwhangchoi Posted Monday at 06:38 PM i guess i have to get the next greatest gpu from nvidia /s
azrael Posted Tuesday at 06:36 AM Author 11 hours ago, davidwhangchoi said: i guess i have to get the next greatest gpu from nvidia /s *Looks at note* So it took 2...two...dos...ni...deux...dyo...dalawa...dau...5090s to perform those renders. Yeah. You're gonna need the latest and greatest Nvidia gpu and a freakin' nuclear reactor. I know we joke about it but seriously...Nvidia is gonna make it a reality at this rate. As for DLSS 5...so Nvidia said "Frak your artists, let's AI slop your character faces." I'm sorry, that is not "photo realistic lighting effects" at play. It's literally AI slop-ifying game designers' work. This comment: Quote Real-time AI enshittification filter was not on my 2026 bingo 😄
JB0 Posted Tuesday at 10:42 PM 16 hours ago, azrael said: As for DLSS 5...so Nvidia said "Frak your artists, let's AI slop your character faces." I'm sorry, that is not "photo realistic lighting effects" at play. It's literally AI slop-ifying game designer's work. I mean, that's what DLSS has been from the start. They're just taking the chains off now.
mikeszekely Posted Wednesday at 12:53 AM (edited) I feel like I'm in bizarro world. Everyone's piling so much hate on DLSS 5, and I'm sitting here thinking, "yeah, that looks pretty good." Everyone's so focused on Grace from RE9 looking all yassified, but I'm noticing how it makes characters in Starfield look like actual humans and not robots wearing the faces of humans they skinned. Edited Wednesday at 11:28 AM by mikeszekely
TangledThorns Posted Wednesday at 11:49 AM I don't know what to make of DLSS 5 just yet. How it actually looks in my own gameplay is what matters.
David Hingtgen Posted Wednesday at 07:57 PM I had a similar question---if it's basically AI "guessing" at details, is it effectively deepfaking the faces? Based on what? Will it be unique for each user/setup? If you take a screencap of yours, and someone else with identical settings takes a screencap of theirs---will they be identical? Will EXACTLY how a character's facial features appear be influenced by randomness/seeds etc?
mikeszekely Posted Wednesday at 10:46 PM 2 hours ago, David Hingtgen said: I had a similar question---if it's basically AI "guessing" at details, is it effectively deepfaking the faces? Based on what? Will it be unique for each user/setup? If you take a screencap of yours, and someone else with identical settings takes a screencap of theirs---will they be identical? Will EXACTLY how a character's facial features appear be influenced by randomness/seeds etc? I don't have all the technical answers. The AI is using a bunch of data from the game about color, lighting, motion, etc, but instead of drawing the frame exactly, it interprets how it thinks it should look. As a lot of people have pointed out with Claire, the result can look less like how you expected it to and more like Tilly Norwood (an AI-generated "actress"). But IMHO it can also take some rather unnatural character models and make them look more natural (see Starfield). I don't think it's totally deepfaking, but it's not exactly limited to just lighting, either. Developers get a lot of say in how much or how little DLSS 5 can actually do, and I'd hope the model's output is consistent, but I guess we won't really know until it's out in the wild.
Dynaman Posted Wednesday at 11:15 PM The problem with AI guessing is that it ends up a lot like third party cheapo animation houses. They guess one way one frame and another way the next frame...
azrael Posted Wednesday at 11:31 PM Author 4 hours ago, David Hingtgen said: I had a similar question---if it's basically AI "guessing" at details, is it effectively deepfaking the faces? Based on what? Will it be unique for each user/setup? If you take a screencap of yours, and someone else with identical settings takes a screencap of theirs---will they be identical? Will EXACTLY how a character's facial features appear be influenced by randomness/seeds etc? Jensen Huang says gamers are 'completely wrong' about DLSS 5 — Nvidia CEO responds to DLSS 5 backlash - The CEO says artistic control remains with developers. (Tom's Hardware) Quote "The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI," Huang continued. He added that developers can still "fine-tune the generative AI" to make it match their style, adding that DLSS 5 adds generative capability to the existing geometry of the game, but that it "doesn't change the artistic control." "It’s not post-processing, it’s not post-processing at the frame level, it’s generative control at the geometry level," he said. ... "All of that is in the control — direct control — of the game developer," he said. "This is very different than generative AI; it’s content-control generative AI. That’s why we call it neural rendering." I don't know. I'm still trying to understand what Jensen is saying in the article. It's not generative AI, but it's content-controlled generative AI? Is that not just generative AI with a different input source? Are you not still guessing what the next frame will be showing? Digital Foundry did a video after the announcement where they did a first look at DLSS 5, and they were told by Nvidia it's going into Nvidia Streamline, so optional now, required later. 🤷♂️ So eventually game devs have to bake it in(?).
Maybe we need reviewers to set up 2 identical systems, run the same title at the exact same rendering point, and see if they can spot a difference with DLSS 5 turned on. Do identical machines produce the same or a different image? If different, by how much? Do 2 different machines produce the same image, or will they differ, and by how much? If they differ, how much of a difference will gamers tolerate if each setup has slightly different fidelity?
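The A/B test proposed above is essentially a per-pixel image diff: capture the same frame on both machines, subtract, and score the difference. A minimal sketch in pure Python; the tiny hand-made "frames" (lists of RGB tuples) are stand-ins for real screenshot data, not anything from an actual capture tool:

```python
# Toy version of the proposed A/B test: diff two "captures" of the same frame.
# A score of 0 means pixel-identical output; anything nonzero is the AI (or
# driver, or anything else) reinterpreting the image differently per machine.

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-channel difference between two equal-sized frames."""
    assert len(frame_a) == len(frame_b), "frames must be the same size"
    total = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(frame_a, frame_b):
        total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
    return total / (3 * len(frame_a))

# Identical captures should diff to zero...
base = [(10, 20, 30), (200, 200, 200), (0, 0, 0)]
print(mean_abs_diff(base, base))  # 0.0
# ...while a reinterpreted frame shows up as a nonzero score.
altered = [(12, 20, 30), (200, 198, 200), (0, 0, 3)]
print(mean_abs_diff(base, altered))
```

In practice reviewers would feed this full screenshots (and probably a perceptual metric like SSIM rather than a raw mean), but the pass/fail question is the same: is the score exactly zero across machines or not?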
David Hingtgen Posted Thursday at 12:44 AM I'd be more for it if the PLAYERS could have some input control. If it's gonna "generate" the finer details of the facial geometry etc. anyways, can I say "look more like X, and less like Y"? Just my luck the AI or its model would be "influenced" by someone I don't like and keep trying to use them as "inspiration for its generating control" or whatever buzzwords they're using to describe what sure sounds like typical AI "guessing at details". If the face isn't going to be exactly what the artist designed, then the control should be in the end user's hands, not the GPU's. Hope the base model has advanced a LOT since Obama was in office.
David Hingtgen Posted Thursday at 06:14 PM Seriously wondering if I should keep my 3060 as an "emergency backup" instead of selling it... (is it worth selling DDR4 in the current market? Got 48GB of it...)
mikeszekely Posted Thursday at 07:19 PM 58 minutes ago, David Hingtgen said: (is it worth selling DDR4 in the current market? Got 48GB of it...) Yes, the RAMpocalypse is affecting DDR4 prices, too. I just paid $60 for two 8GB sticks to upgrade a barely-used 4yo desktop. I'm guessing that yours is six 8GB sticks? That's nearly $200 worth of RAM. Thankfully I had a spare 128GB NVMe drive since it was originally booting off a mechanical SATA drive.
azrael Posted Thursday at 07:43 PM Author 2 hours ago, David Hingtgen said: Seriously wondering if I should keep my 3060 as an "emergency backup" instead of selling it... I'm keeping my 3070Ti as a backup, but only because my board doesn't support PCIe 5 (which is what my 5070Ti runs at) and I need my 3070Ti to manually set the PCIe slot to run at PCIe 4 so I can then put in my 5070Ti any time I flash the BIOS. And if my 5070Ti ever decides to self-immolate, I have a backup. (Note that while there are only a handful of reports of the 5070Ti's 12VHPWR connector melting compared to the 5080/5090, it's still a non-zero number of incidents, and until something is done to make sure the 12VHPWR connector doesn't melt, I'm not taking any chances.)
David Hingtgen Posted Thursday at 08:22 PM My DDR4 is 2x16 and 2x8. They're not identical in specs but very close, and I've never had a single error running them together. (DDR4 is pretty accommodating with mixing and matching, so long as it's close.)
davidwhangchoi Posted yesterday at 05:30 PM "NVIDIA has now confirmed that DLSS 5 takes a 2D rendered frame plus motion vectors as input. That clarification came in follow-up answers to questions about how the new technology actually works. The answer is direct, and it does not fully match the earlier impression created by NVIDIA’s own marketing language around geometry, materials, and scene understanding. DLSS 5 does not appear to read 3D geometry, depth, or artist-authored material data directly from the game engine. When asked whether the model is effectively taking a single 2D frame with motion vectors to create the output, the answer was: “Yes, DLSS 5 takes a 2D frame plus motion vectors as input.” NVIDIA also said the model understands scene semantics such as hair, fabric, skin, and lighting conditions “all by analyzing a single frame.” NVIDIA had earlier described DLSS 5 in a way that suggested a deeper understanding of the scene. The follow-up answers paint a narrower picture. When asked whether the model reads PBR (Physically Based Rendering) properties from the engine, NVIDIA said: “DLSS 5 only takes the rendered frame and... That may explain why some preview images raised concerns. In one example, a character appears to gain hair detail in an area where it was not visible before. In another, facial details appear altered enough (like the nose) to raise questions about whether the model is changing the look of the character rather than only improving lighting." Source: VideoCardz.com https://videocardz.com/newz/nvidia-confirms-dlss-5-uses-a-2d-frame-plus-motion-vectors-as-input In other words, Nvidia lying as usual, and it's pretty much typical AI generating faces.
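For context on why "a 2D frame plus motion vectors" is a meaningful distinction: the classic use of motion vectors in upscalers is temporal reprojection, i.e. fetching each pixel from where it was in the previous frame. DLSS 5's actual model is proprietary, so this is only an illustrative toy of the kind of input NVIDIA confirmed, with a made-up flat-list frame layout:

```python
# Toy temporal reprojection: per-pixel motion vectors tell you where each
# output pixel came from in the previous frame. Note that nothing here needs
# 3D geometry, depth, or material data from the engine -- just pixels and
# vectors, which is exactly the input set NVIDIA confirmed for DLSS 5.

def reproject(prev_frame, motion, w, h):
    """prev_frame: flat list of w*h pixel values; motion: flat list of (dx, dy)."""
    out = []
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y * w + x]
            # Clamp the source coordinate so it stays inside the frame.
            sx = min(max(x - dx, 0), w - 1)
            sy = min(max(y - dy, 0), h - 1)
            out.append(prev_frame[sy * w + sx])
    return out

# A 3x1 "frame" whose contents moved one pixel to the right since last frame:
prev = [1, 2, 3]
motion = [(1, 0), (1, 0), (1, 0)]
print(reproject(prev, motion, 3, 1))  # [1, 1, 2]
```

The controversy is over what happens after this step: a generative model that "understands scene semantics" from the 2D frame can repaint detail (hair, noses) that reprojection alone would never invent.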
kajnrig Posted 23 hours ago 3 hours ago, davidwhangchoi said: Source: VideoCardz.com https://videocardz.com/newz/nvidia-confirms-dlss-5-uses-a-2d-frame-plus-motion-vectors-as-input More direct source:
Tking22 Posted 21 hours ago Lol I hate it, it's basically an AI TikTok filter you can put on games to make them look different, not necessarily better or more detailed.
mikeszekely Posted 20 hours ago Got my MacBook Neo today. I have no idea how newer MacBooks are, but I can compare to an M1 Air. Some first impressions:
- Keyboard is fine, I guess. Definitely feels like the keys are a bit more rattle-y or require less force to actuate than the Air. Don't really care too much about the loss of the backlit keys, since I'm a touch typist anyway.
- Screen's ever so slightly smaller, but noticeably brighter.
- I never noticed that the trackpad on the M1 Air doesn't actually physically click. It does on the Neo. It's not necessarily a bad feeling, just different.
- I love the new colors. I got the Citrus one. Beats the heck out of the boring silver.
- I know a lot of people are complaining about the lack of ports... do newer MacBooks have more? There's two USB-C on the left, same as the M1 Air. There's also a headphone jack near the front on the same side. The Air has one on the opposite side. So I/O is the same for me.
- The form factor is slightly different. Again, I don't know how newer Airs are, but my M1 is thickest near the rear and tapers toward the front. The Neo retains the same thickness throughout. The shape actually reminds me of an old MacBook I had like 20 years ago, the one with the Intel Core 2 Duo. It's obviously much thinner than that MacBook, but it makes the Air seem thinner.
- My subjective feelings on the performance are still up in the air. The M1 with 8GB of RAM was already enough for most of the stuff I used a Mac for. On paper the A18 Pro should be an upgrade over the M1, if not a huge one.
azrael Posted 19 hours ago Author 10 minutes ago, mikeszekely said: I know a lot of people are complaining about the lack of ports... do newer MacBooks have more? There's two USB-C on the left, same as the M1 Air. There's also a headphone jack near the front on the same side. The Air has one on the opposite side. So I/O is the same for me. The Air has two USB-C Thunderbolt 4 ports and a MagSafe power connector. The Neo has 2 USB-C ports, but 1 is USB 3 and the other is USB 2. The A18 Pro's USB controller is USB 3.2 Gen 2 (10 Gbps), so if you connect that to a 4K 60Hz monitor and leave the screen open on the laptop, there isn't THAT much bandwidth left, so I'm not sure what people were expecting. Time to digest more of this DLSS 5 content...
mikeszekely Posted 18 hours ago 1 hour ago, azrael said: The Air has two USB-C Thunderbolt 4 ports and a Magsafe power connector. The M1 Air doesn't have MagSafe; that didn't come until the M2. The downgraded USB port is a bummer, but when I do hook it up to a display I wind up using a hub anyway, so while it's one aspect that really is a downgrade, it won't affect me personally. I'd say the Neo is a capable computer for the price, probably good enough for most people, and a great starting point for people looking to make the switch to macOS. If you're using an M1 it is, like I said, a modest but inexpensive upgrade, for the most part. If you need more power, or you're running an M2 or newer, though, you should probably be looking at an M4 or M5 Air.
David Hingtgen Posted 2 hours ago 19 hours ago, Tking22 said: Lol I hate it, it's basically an AI tiktok filter you can put on games and make them look, different, not necessarily better or more detailed. I'm very curious on how it works on non-human characters. "Hey, Mass Effect Super-Legendary Remake. Blue aliens---must be Na'vi, add the Na'vi filter!" Because that's probably the only "blue humanoid aliens" it's trained on...
azrael Posted 1 hour ago Author 32 minutes ago, David Hingtgen said: I'm very curious on how it works on non-human characters. "Hey, Mass Effect Super-Legendary Remake. Blue aliens---must be Na'vi, add the Na'vi filter!" Because that's probably the only "blue humanoid aliens" it's trained on... Or aliens in general (not just humanoid alien-types). Judging by Nvidia's responses in Daniel Owen's video, DLSS 5 isn't even a TikTok/Instagram filter. It's straight-up re-interpreting the image based on the game engine's input. Not even an overlay. They've showcased a lot of people in their renders, but what about inanimate objects? A shadow cast on a textured wall could be reinterpreted as, for example, a different color with cracks or a different texture altogether in one frame before being correctly interpreted in the next frame by DLSS. Random dirty graffiti on a wall could be reinterpreted as moss in a random scene in Last of Us just because the AI thinks it's moss, based on how it was drawn by the game engine, how lighting was set for the scene, etc. We shall see, though. I think the more important takeaways right now are a) Nvidia generated those renders with two 5090s, and b) these tools further lock developers into Nvidia's software. Nvidia says you only need 1 card to do this, but they're not making that case by using two cards, and surely not helping by using their halo product line. If Nvidia is pushing more resources into neural rendering, will the next RTX cards carry less raw performance and lean more heavily on AI-accelerating hardware? I think that is the direction they are heading. It's the direction they've always been heading. We've seen Nvidia slowly de-emphasize rasterization performance. On 3/17/2026 at 3:42 PM, JB0 said: I mean, that's what DLSS has been from the start. They're just taking the chains off now. I think the better saying is "Masks off." The other takeaway is locking developers into their ecosystem.
With AMD slipping to only 5% of the GPU market and Intel barely a blip on the chart, Nvidia is barrelling toward a monopoly. And we know where that leads.