You have to back up from that question. “How to be present in a video call” is already an answered question.
The “when you’re using the headset” part is the issue. Why are we using the headset? What are the benefits? Why am I making these tradeoffs like messing up my hair, putting a heavy device on my head, messing up my makeup, etc.?
This is like saying “The Segway had advanced self-leveling to solve the problem of how to balance when you’re on an upright two wheel device”.
But why are you on an upright two wheel device? Why not just add a third wheel? Why not ride a bicycle? Why not ride a scooter?
The solution is really cool and technologically advanced but it doesn’t actually solve anything besides an artificially introduced problem.
Not really, because this misses the premise of why the device itself is useful.
VR/AR headsets are useful for working on and demonstrating many things that we've had to compromise to fit into a 2D paradigm. Being able to be present with that 3D model has clear advantages over using, for example, a mouse with a 2D equivalent or a 3D projection.
Having to justify how the 3rd dimension is useful is probably a conversation where one party is not engaging in good faith.
The Segway analogy is also pretty poor, considering how useful self-balancing mobility devices have proven to be - including those with only a single wheel.
> Why would I be paying all this money for this realistic telepresence when my shitbox HP laptop from Walmart has a perfectly serviceable webcam?
I gave a pretty straightforward answer for why this feature would exist in this product. People on this forum sometimes ask legitimate questions.
It's pretty clear you weren't asking one; rather, you're seeking an opportunity to merely push some tired agenda, likely tied to some personal vendetta, and you're doing a pretty piss-poor job of it.
You say that me making you justify the third dimension is bad faith arguing, but you never even attempted to justify it. You actually do have to justify it because so many 3D technologies have been market duds. VR gaming sputtered into decline, 3D televisions died off, glasses-free 3D is nowhere to be found anymore after the 3DS and that crazy Red smartphone…you actually very much do have to justify that there’s demand for this new paradigm.
> VR/AR headsets are useful for working on and demonstrating many things
What things?
> that we've had to compromise to fit into a 2D paradigm.
What compromises?
> Being able to be present with that 3D model has clear advantages over using, for example, a mouse with a 2D equivalent or a 3D projection.
What advantages?
I think if this was even a niche representation of the future we’d see specialized companies with 3D-oriented software like Autodesk jumping all over the Vision Pro specifically, but they seem to be nowhere to be found. All the key players in the industry besides Meta have basically bailed, including Microsoft and Google shutting down commercial/industrial solutions that had previously been touted as successful.
I have no vendetta here, I just think that full immersion VR was the wrong play for productivity and general computing. I think that the full immersion VR market is dying and that solutions like Meta Ray-Bans and VITURE glasses are way more palatable because they are way more “normal,” including the way they eschew these moonshot paradigm-shifting technologies that might actually work very well, but nobody asked for.
Nobody wants to be a 3D avatar and work inside a headset where your view of the outside world is desaturated by cameras because it’s cringe and weird.
As a side note I will also point out that if you use a Vision Pro with a MacBook to use the secondary screen functionality (required for writing code or running apps outside the App Store) you’re basically doing the exact same thing as VITURE glasses except you paid 10x more and your battery life sucks. And you can just join a standard conference call on your glasses and essentially look normal.
1. The scanning is fast; it takes longer to set up a fingerprint on a MacBook Air. It's just turning the head from side to side, then up and down, smiling, and raising one's eyebrows.
2. I used the M5, and the processing time to generate the persona was quick. I didn't time it, but it felt like less than 10 seconds.
3. My cheeks tend to restrict smiling while wearing the headset; it works, but people who know me understood what I meant when I said my smile was hindered.
4. Despite the limited actions used for setup, it reproduces a far greater range of facial movements. For example, if I do the invisible string trick (moving the top lip in one direction and the lower lip in the opposite direction, as if pulled by a string), it captures my lips correctly.
5. I wasn't expecting this big of a jump in quality from the v1.
A frequently overlooked point is display brightness. The Pro models offer 1600 nits of peak brightness, which makes them good units for looking at HDR content, especially if you like to take photos or edit videos. Meanwhile, the Air maxes out at 500 nits, so the effect and the contrast are drastically reduced on those models.
> and these being mini-LED displays, contrast is already infinite.
I think you may have mixed up mini-LED backlighting with OLED and microLED displays. mini-LED backlights merely allow for better local dimming of the backlight behind an LCD, but the number of independently variable backlight zones is still orders of magnitude smaller than the number of pixels. Over short distances, an LCD with local dimming is still susceptible to all of the contrast-limiting downsides of an LCD with a uniform static backlight (and local dimming brings new challenges of its own).
OLED is the mainstream display technology where individual pixels directly emit their own light, so you can truly have a completely black pixel next to a lit pixel. But there are still layers and coatings between the OLED and the user, so infinite contrast isn't actually achievable.
microLED is a so-far-unsuccessful attempt to provide the benefits of OLED without as many of the downsides (primarily, the uneven aging). Nobody has managed to make large microLED displays economically yet, and it doesn't look like the tech will be going mainstream anytime soon.
> but the number of independently variable backlight zones is still orders of magnitude smaller than the number of pixels
The appearance of a lone mouse cursor on a black screen in the dark is mildly amusing for exactly this reason. You can watch as the ghostly halo of light follows it around the screen as you move the cursor.
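To make that concrete, here's a toy sketch of why the halo shows up. The zone size, leakage fraction, and the naive "zone backlight = brightest pixel in the zone" policy are all illustrative assumptions, not any particular panel's actual dimming algorithm:

```python
# Toy model of LCD local dimming. Assumptions (illustrative only): one
# backlight zone covers many pixels, the zone is driven to the level of its
# brightest pixel, and a "black" LCD pixel still leaks ~2% of the backlight.

ZONE_SIZE = 8          # pixels per backlight zone (real panels: thousands)
LCD_LEAKAGE = 0.02     # fraction of backlight leaking through a "black" pixel

def rendered(image):
    """image: target pixel values in 0..1; returns what the panel actually shows."""
    out = []
    for start in range(0, len(image), ZONE_SIZE):
        zone = image[start:start + ZONE_SIZE]
        backlight = max(zone)                     # zone must serve its brightest pixel
        for target in zone:
            lcd = target / backlight if backlight else 0.0
            out.append(backlight * max(lcd, LCD_LEAKAGE))   # leakage floor = halo
    return out

frame = [0.0] * 32      # black screen...
frame[10] = 1.0         # ...with a single white cursor pixel
print([round(v, 3) for v in rendered(frame)])
# Pixels sharing the cursor's zone come out at ~0.02 instead of 0.0 (the glow),
# while zones with nothing lit stay truly black.
```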
I'll upgrade my machine when they put an OLED display in it.
Contrast is significantly poorer on the Air display, and HDR is already in your own photos if you have a modern smartphone, so the idea that it’s niche or irrelevant is a naive take.
The perceptual difference between SDR and HDR isn’t a minor bump; it is conspicuous and a driver of realism.
If one cares about the refresh rate of their screen, then they’d trivially notice the improvement that high-nit displays provide.
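For a rough sense of the gap, here's a back-of-envelope in photographic stops. Treating ~100 nits as nominal SDR reference white is an assumption on my part; real grading and calibration vary:

```python
import math

# Extra highlight headroom above SDR reference white, in stops.
# Assumes ~100 nits as nominal SDR white (illustrative; calibration varies).
SDR_WHITE_NITS = 100

for name, peak_nits in [("Air, ~500 nits", 500), ("Pro HDR peak, ~1600 nits", 1600)]:
    stops = math.log2(peak_nits / SDR_WHITE_NITS)
    print(f"{name}: ~{stops:.1f} stops above SDR white")
# ~2.3 stops vs ~4 stops of headroom for specular highlights in HDR photos/video.
```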
It does sound paradoxical, but it's the difference between steering information to things that serve you, versus having others steer the information you see to things that serve them.
Reddit right now is in a very bad place. It's passed the threshold where bots are posting and replying to themselves. If humans left the platform it would probably look much the same as it does now.
The result is a noticeable uptick in forums moving to Discord or rolling their own websites, which is probably a good thing for dodging the obvious commercial manipulation, propaganda, and foreign-influence vectors.
I haven't played Minecraft in a fair while, but I started with the alpha builds back when the Seecret updates were the most exciting thing going for the game.
> I'd assume this means that any official modding support is now stone dead and will never happen.
I was a bit surprised to read this, because talk of modding support has been on the radar since the Notch days; it's wild to me that this hasn't happened yet.
I suspect Minecraft was large enough to support an effective modding community from the start regardless of official support, so there was always some kind of third-party unofficial mechanism (ModLoader, then Forge, and now Fabric and Quilt). Mojang probably punted it down the priority list because of that, or didn't want to impose a structure and kill those ecosystems. Technically speaking, Java is reasonably easy to plug stuff into at runtime, so that was never a barrier.
The original issue with official modding support, from my perspective, has always been a legal one. But the Mojang EULA explicitly allows modding now. So I would see this decision as one in a long line of decisions by Mojang to both normalise the legal relationship with modders, and beyond that giving a "thumbs up" to the community.
One of the big issues with that phone was that in order to do dynamic perspective, you have to run a 3D render at 60fps constantly. That's a huge power hog, and it prevents you from using many of the power-saving techniques you otherwise could on a normal phone -- shutting down the GPU, reducing the refresh rate, heck, even RAM-backed displays.
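A rough back-of-envelope of what that costs; the wattages and render times below are made-up illustrative numbers, not Fire Phone measurements:

```python
# Back-of-envelope: average GPU power for a UI that only redraws on changes
# vs. a head-tracked UI that must render every frame. All figures are
# hypothetical illustrations, not measurements of any real device.

GPU_ACTIVE_W = 1.2     # assumed GPU power while rendering
GPU_IDLE_W = 0.05      # assumed GPU power when clock-gated / powered down

def avg_gpu_power(render_ms_per_frame, redraws_per_second):
    busy_fraction = min(redraws_per_second * render_ms_per_frame / 1000.0, 1.0)
    return busy_fraction * GPU_ACTIVE_W + (1 - busy_fraction) * GPU_IDLE_W

# Static 2D UI: content changes maybe a couple of times per second.
print("static UI   :", round(avg_gpu_power(4, 2), 3), "W")
# Dynamic perspective: every one of the 60 frames depends on head position.
print("head-tracked:", round(avg_gpu_power(4, 60), 3), "W")
```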
1. Educate children about bad actors and scams. (We already do this in off-line contexts.)
2. Use available tools to limit exposure. Without this, children will run into such content even when not seeking it, as demonstrated by TikTok seemingly sending new accounts to sexualised content (1), and by Google's and Meta's pathetic ad controls.
3. Be firm about the right age for them to have their own phone. There is zero possibility that they'll be able to have one secretly without a responsible parent discovering it.
4. Schools should not permit phone use during school time (enforced in numerous regions already.)
5. If governments have particular issues with websites, they can use their existing powers to block or limit access. While this is "whack-a-mole", the idea of asking each offshore offending website to comply is also "whack-a-mole" and a longer path to the intended goal.
6. Don't make the EU's "cookies" mistake. E.g. if the goal is to block tracking, then outlaw tracking; do not enact proxy rules that serve only as creative challenges for keeping the status quo.
and the big one:
7. Parents must accept that their children will be exposed at some level, and need to be actively involved in the lives of their children so they can answer questions. This also means parenting in a way that doesn't condemn the child needlessly - condemnation is a sure strategy to ensure that the child won't approach their parents for help or with their questions.
Also some tips:
1. Set an example of appropriate use of social media. Doomscrolling on TikTok and Instagram in front of children sets a bad example. Some housekeeping on personal behaviours will have a flow-on effect.
2. If they have social media accounts, the algorithm is at some point going to recommend them to you. Be vigilant, but also handle the situation appropriately; jumping to condemnation just makes the child better at hiding their activity.
3. Don't post photos of your children online. It's not just an invasion of their privacy, but pedophile groups are known to collect, categorise and share even seemingly benign photos.