Coverage of a very simple new method of vestibular stimulation that basically acts as white noise and prevents motion sickness. The inventor of the technology posts in the thread. Excerpt (definitely read the whole thing if you’re interested):
For those who mentioned hacking the technology together, that may be your best bet in the short term. Otolith is a medical device company focused primarily on the vertigo market. We’re open to licensing our technology to large OEMs but have no plans to become an aftermarket retailer for the consumer market. You can get everything you need to build something that works from Adafruit, but you’ll probably run into the same issues I did four years ago - namely sound, heat, volume, and weight (potentially less of a problem with VR than car sickness since you’re already wearing an ugly bulky device).
We’ve held back releasing our clinical research until we were closer to FDA submission. This spring the motion sickness and VR sickness research should be submitted to the Journal of Vestibular Research and Frontiers in Neurology, respectively. The results have been that 100% of people who experienced motion or VR sickness symptoms had a significant improvement. For VR this was first tested under an IRB-approved study with U of Maryland, using a program that filled the visual field with dots rotating at an increasing speed. Subjects used the space bar to start the dots rotating from rest and to stop the dots when they first experienced discomfort. For those susceptible to VR sickness that took ~20 seconds. Each subject was tested 16 times (8 power levels with the device turned off and on). Once the ideal power level was defined, a second cohort was used with frequency being the variable. Our IP is wrapped up in being the first to quantitatively define what vibrations had a therapeutic effect. The results were unexpected and not mentioned in any other patents, publications, or products.
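(As an aside from the quoted post: here’s a minimal sketch, in Python, of the trial bookkeeping described above - 8 power levels crossed with device off/on for 16 trials, timing how long the subject tolerates the rotating dots. The power values, shuffling, and console prompts are my own stand-ins, not the study’s actual protocol.)

```python
import itertools
import random
import time

# Hypothetical trial bookkeeping: 8 power levels x (device off, device on)
# = 16 trials. In the real study the subject pressed the space bar inside a
# VR program; here Enter at a console prompt stands in for that.
POWER_LEVELS = range(1, 9)
CONDITIONS = ("device_off", "device_on")

def run_trial(power, condition):
    """Time how long the subject tolerates the rotating-dot stimulus."""
    input(f"[{condition}, power {power}] Press Enter to start the dots...")
    start = time.monotonic()
    input("Press Enter again at the first sign of discomfort...")
    return time.monotonic() - start      # tolerated seconds for this trial

def run_session():
    trials = list(itertools.product(POWER_LEVELS, CONDITIONS))
    random.shuffle(trials)               # assumed counterbalancing, not specified
    return {(power, cond): run_trial(power, cond) for power, cond in trials}

if __name__ == "__main__":
    for key, seconds in sorted(run_session().items()):
        print(key, f"{seconds:.1f}s tolerated")
```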
The second VR test was with Eve: Valkyrie. Participants were asked to play it for 15 minutes and fill out the clinically validated Motion Sickness Assessment Questionnaire (MSAQ). On a following day the subjects played it again with OtoTech. The results without OtoTech were that 35% had to drop out before 15 minutes and 75% rated at least one symptom a 5 out of 9 or above. With OtoTech, nobody dropped out or rated any symptom at a 5 or above. VR Motion, mentioned in the article, has used our technology on over 1,000 people on their driving simulation platform. Prior to using OtoTech 20-30% of users had to stop early; since using OtoTech, 2 people have had to stop early. For all intents and purposes, OtoTech is a universal solution, but unfortunately VR isn’t our target market.
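(Another aside: the two headline numbers - dropout rate and the share of players rating at least one symptom a 5+ on the 9-point scale - are just simple tallies over per-participant records. A hypothetical sketch with made-up toy data and field names of my own:)

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    participant: str
    dropped_out: bool                                  # stopped before 15 minutes
    msaq_ratings: list = field(default_factory=list)   # 1-9 rating per MSAQ item

def summarize(sessions):
    """Tally dropout rate and share with any symptom rated >= 5 of 9."""
    n = len(sessions)
    dropouts = sum(s.dropped_out for s in sessions)
    symptomatic = sum(any(r >= 5 for r in s.msaq_ratings) for s in sessions)
    return {"dropout_rate": dropouts / n,
            "share_with_symptom_5_plus": symptomatic / n}

# Toy data, not the study's: one dropout with strong symptoms, one clean run.
example = [
    Session("p1", dropped_out=True,  msaq_ratings=[6, 3, 2, 7]),
    Session("p2", dropped_out=False, msaq_ratings=[1, 2, 1, 1]),
]
print(summarize(example))  # {'dropout_rate': 0.5, 'share_with_symptom_5_plus': 0.5}
```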
Here’s the September 2018 version of Ramez Naam’s energy talk, and it’s really fascinating to basically see all the numbers update - there’s a new insightful bit about how ridesharing and autonomy (combined with EVs of course) will upend vehicle/transportation economics.
Here’s the October 2017 version:
Ramez is one of the best/most insightful writers on energy trends/economics and I highly recommend that anyone with an interest in the future check out more at http://rameznaam.com/tag/energy/
The MinION weighs under 100 g and plugs into a PC or laptop using a high-speed USB 3.0 cable. No additional computing infrastructure is required. Not constrained to a laboratory environment, it has been used up a mountain, in a jungle, in the Arctic and on the International Space Station.
The MinION is commercially available, simply by paying a starter-pack fee of $1,000. The MinION starter pack includes materials you need to run initial sequencing experiments, including a MinION device, flow cells and kits, as well as membership of the Nanopore Community.
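Once a run has been basecalled you mostly end up working with ordinary FASTQ files, so the analysis side is approachable too. Here’s a minimal sketch using Biopython to count reads and bases - the filename is hypothetical, and it assumes the vendor toolchain has already basecalled the raw signal to FASTQ:

```python
# Sketch: basic read statistics from basecalled MinION output (FASTQ).
# Requires Biopython; the input path is a hypothetical example.
from Bio import SeqIO

def read_stats(fastq_path):
    """Count reads and total bases in a FASTQ file of nanopore reads."""
    n_reads = 0
    n_bases = 0
    for record in SeqIO.parse(fastq_path, "fastq"):
        n_reads += 1
        n_bases += len(record.seq)
    return n_reads, n_bases

if __name__ == "__main__":
    reads, bases = read_stats("minion_run.fastq")   # hypothetical filename
    print(f"{reads} reads, {bases} bases, mean length {bases / max(reads, 1):.0f}")
```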
More gene sequencing research and notes soon; I’ve been collecting a lot of health research recently.
This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis. Our video demo can be found at https://youtu.be/PCBTZh41Ris.
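The core idea - detect the source dancer’s pose each frame, rasterize it as a stick-figure image, and feed that through a generator trained to draw the target person - is easy to picture in code. Here’s a rough sketch with stub functions standing in for the pose detector and the learned generator; it mirrors the per-frame pipeline the abstract describes but is not the authors’ implementation (which adds temporal smoothing and face refinement):

```python
import numpy as np

def detect_pose(frame: np.ndarray) -> np.ndarray:
    """Stub: return joint keypoints for the person in the frame."""
    return np.zeros((18, 2))              # e.g. 18 (x, y) joints

def render_pose_image(keypoints: np.ndarray, shape) -> np.ndarray:
    """Stub: rasterize the skeleton into an image the generator consumes."""
    return np.zeros(shape, dtype=np.uint8)

def generator(pose_image: np.ndarray) -> np.ndarray:
    """Stub: learned mapping from pose image to target-subject appearance."""
    return pose_image.copy()

def transfer(source_frames):
    """Per-frame motion transfer: source video in, synthesized target out."""
    out = []
    for frame in source_frames:
        keypoints = detect_pose(frame)
        pose_img = render_pose_image(keypoints, frame.shape)
        out.append(generator(pose_img))
    return out

if __name__ == "__main__":
    dummy_video = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(4)]
    print(len(transfer(dummy_video)), "frames synthesized")
```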
This is a pretty interesting writeup on the state of Waymo’s ambitions.
Investment bank UBS estimated global revenues from self-driving technology by 2030 will be up to $2.8 trillion, with Waymo capturing a whopping 60 percent of that market. Morgan Stanley recently upped its valuation of Waymo to a staggering $175 billion, $80 billion of which is expected to come from the company’s ride-hailing services. Under CEO John Krafcik, the former Google self-driving team has evolved from a humble science project to a fully formed company with four specific targets to deploy its technology: ride-hailing, logistics, privately owned vehicles, and public transportation.
For its robot taxi service, the company has reached deals to buy up to 62,000 plug-in hybrid Chrysler Pacifica minivans and 20,000 all-electric Jaguar I-Pace SUVs to build up its fleet over the next few years. Its first market will be Phoenix, with subsequent launches likely in test cities like Mountain View, San Francisco, Detroit, and Atlanta.
Waymo is also planning on launching a self-driving trucking service. It has outfitted several Peterbilt Class 8 semi trucks with autonomous hardware and software, which are currently hauling equipment to Alphabet facilities in Atlanta. Waymo is also in talks with Honda to co-create a self-driving delivery vehicle from scratch.
Until recently, Waymo has spoken only vaguely about licensing its self-driving hardware and software to automakers. But then in May, it announced it was in talks with Fiat Chrysler about developing self-driving cars you could buy at a dealership. Krafcik has said that Waymo is also in discussions with “more than 50 percent” of the global auto industry, and the introduction of self-driving cars for personal use will trail its ride-hailing service by “a couple years.”
For those who are into high-end computer graphics, the announcement of the new Nvidia Turing hardware is a pretty big deal - it looks like it is now ushering in the age of real-time ray tracing. It also looks like it’ll allow drop-in replacement for high-end CPU render farms (about 10 Xeons per GPU).
This is a followup to Nvidia’s ray tracing announcements earlier this year at GDC/GTC. Also earlier this year, Unreal published a pretty good article/presentation on integrating ray tracing into their pipeline.
Nvidia is packaging this up as their RTX platform (see also: OptiX), which is largely ray tracing focused, while AMD’s strategy is creating an open standard/platform called ProRender that allows easily mixing rasterized and raytraced rendering.
It appears Nvidia has been pushing very hard on real-time ray tracing, making some big hires, and spending years working on custom hardware. Translated from a Chinese ex-Nvidia GPU architect:
The RT core essentially adds a dedicated pipeline (ASIC) to the SM to calculate ray-triangle intersections. It can access the BVH and configure some L0 buffers to reduce the latency of BVH and triangle data access. The request is issued by the SM as an instruction, and the result is returned to the SM’s local registers; the instruction can be interleaved and run concurrently with other arithmetic or memory I/O instructions. Because it is ASIC-specific circuit logic, performance/mm² can be an order of magnitude higher than using shader code for the intersection calculation. Although I have left NV, I was involved in the design of the Turing architecture; I was responsible for variable rate shading. I am excited to see the release now.
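For a sense of what that fixed-function unit is hardwiring, the ray-triangle intersection test itself is small - below is the standard Möller-Trumbore algorithm in plain Python/NumPy as an illustration. Nvidia hasn’t published the RT core’s actual circuit, so this is just the textbook software version of the operation it accelerates:

```python
import numpy as np

EPSILON = 1e-8

def ray_triangle_intersect(origin, direction, v0, v1, v2):
    """Möller-Trumbore ray/triangle test; returns hit distance t or None."""
    edge1 = v1 - v0
    edge2 = v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < EPSILON:               # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det     # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det    # distance along the ray to the hit
    return t if t > EPSILON else None

# Quick check: a ray shot down the z-axis at a triangle in the z=5 plane.
hit = ray_triangle_intersect(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
    np.array([-1.0, -1.0, 5.0]), np.array([1.0, -1.0, 5.0]), np.array([0.0, 1.0, 5.0]),
)
print(hit)  # 5.0
```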
Here’s an interesting discussion from earlier in the year that focuses on denoising, along with a simple discussion of various ray tracing issues:
Irrespective of real-time raytracing, realtime graphics are looking pretty good these days. Here’s something Unity published at the beginning of the year:
While I’m posting interesting graphics stuff, here’s one on Light Fields in VR (super awesome) and on rendering Fractals in VR:
Here’s a video of the Looking Glass Light Field Display (single-axis). The Kickstarter is going on now, it’s quite cool.
What’s interesting with Looking Glass is that they’ve spent years iterating on relatively lo-fi versions, vs. companies like Fovi3D who are pushing the state of the art but also just slogging away at high-end vertical markets while they do R&D. Here’s an interesting interview from SID w/ the Fovi3D CTO:
I just watched a video covering Broadbit, a battery startup with a metallic sodium chemistry that promises a specific energy of 300-400 Wh/kg (vs ~250 Wh/kg for current li-ion cells). They presented last year at IDTechEx - it looks like the US show is happening Nov 14-15 @ Santa Clara and will be showing off 3D printing, EVs, energy storage, graphene, IoT, sensors, etc - $150 for an exhibition pass, might be worth a walkthrough…
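To put those specific energy figures in perspective, here’s a quick back-of-the-envelope in Python - the 75 kWh pack size is my own assumption (not from the talk), and it counts cell mass only, ignoring pack overhead:

```python
# Cell mass needed for an assumed 75 kWh EV pack at different specific energies.
# The pack size is a hypothetical assumption; enclosure, cooling, and wiring
# are ignored, so real packs would weigh more.
PACK_KWH = 75

for wh_per_kg in (250, 300, 400):            # today's li-ion vs the claimed range
    mass_kg = PACK_KWH * 1000 / wh_per_kg    # kWh -> Wh, then Wh / (Wh/kg) = kg
    print(f"{wh_per_kg} Wh/kg -> {mass_kg:.0f} kg of cells")

# 250 Wh/kg -> 300 kg, 300 Wh/kg -> 250 kg, 400 Wh/kg -> 188 kg
```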