Technology surrounds us, and its surface is becoming more complex, more pliable, and more familiar to the eye. Virtual reality is no longer creeping into the mainstream: it's leaping.
Just last month, secretive VR startup Magic Leap received more than $793 million in new funding from investors including Google and Qualcomm, quickening its progress toward creating seamless experiences in which digital and physical worlds collide.
When MIT Technology Review editor Rachel Metz visited Magic Leap’s headquarters, she discovered a world where crisp virtual characters were already roaming the halls, waiting for a device to be perfected that will bring them into the public eye.
Not to be outdone, Microsoft revealed its plans to release a sleek-looking augmented reality headset this year, too. HoloLens will allow users to interact with holographic surroundings and characters, and Microsoft partners Volvo, NASA and Trimble are already testing it in the business world.
Though the industry is giddy with the potential of VR, the consumer is still a step away from being able to enjoy a virtual experience while riding the bus.
The Collection Wall, a 40-foot interactive, multitouch, MicroTile wall, displays in real time all works of art from the permanent collection currently on view in the galleries—between 4,200 and 4,500 artworks at any given time. In addition, the Collection Wall displays thematic groupings that may include highlighted artworks currently on loan as well as select light-sensitive artworks that are in storage.
The Collection Wall facilitates discovery and dialogue with other visitors and can serve as an orientation experience, allowing visitors to download existing tours or create their own tours to take out into the galleries on iPads and iPhones. The largest such screen in the United States, the Collection Wall enables visitors to connect with objects in the collection in a playful and original way, making their visit a more powerful personal experience. Its display transitions every 40 seconds to keep things interesting—grouping artworks by theme and type, such as time period or materials and techniques, as well as by 32 curated views of the collection.
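The 40-second rotation through curated views works like a simple cyclic scheduler. The following is a minimal sketch of that behavior, assuming a fixed list of 32 views; the names and structure are illustrative, not the wall's actual software.

```python
# Hypothetical sketch of the Collection Wall's 40-second rotation
# through its 32 curated views. All names here are assumptions.

ROTATION_SECONDS = 40
curated_views = [f"view-{i}" for i in range(1, 33)]  # "32 curated views"

def view_at(elapsed_seconds: int) -> str:
    """Return which curated view is on screen after the given elapsed time."""
    index = (elapsed_seconds // ROTATION_SECONDS) % len(curated_views)
    return curated_views[index]
```

For example, `view_at(0)` gives the first view, `view_at(40)` the second, and after all 32 views (1,280 seconds) the cycle wraps back to the start.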
How it works
Measuring 5 feet tall by 40 feet wide, the wall is composed of 150 Christie MicroTiles and displays more than 23 million pixels, the equivalent of more than twenty-three 720p HDTVs. The Christie iKit multitouch system allows multiple users to interact with the wall, simultaneously opening as many as 20 separate interfaces across the Collection Wall to explore the collection. The software was written in openFrameworks and runs on two Windows 7 workstations, supported by four Linux servers that process the video across the wall and an RFID server that manages iPad/iPhone station connectivity.
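The HDTV comparison above is straightforward arithmetic: a 720p display is 1280 by 720 pixels. A quick back-of-the-envelope check, using the article's own figure of 23 million pixels:

```python
# Sanity check of the pixel comparison in the text.
WALL_PIXELS = 23_000_000   # "more than 23 million pixels" (article's figure)
HD_720P = 1280 * 720       # pixels in one 720p HDTV panel

equivalent_tvs = WALL_PIXELS / HD_720P
print(round(equivalent_tvs))  # ~25, consistent with "more than twenty-three"
```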
High-resolution digital cameras ranging from 48 to 192 megapixels were used to photograph the CMA's collection. The results are publication-quality photographs of works as large as 50 by 40 inches, which can be enlarged on a standard iPad, iPhone, or computer monitor to the equivalent of 220 by 160 inches for examining detail.
Every 10 minutes, an application content management system updates the wall with high-resolution images of artwork, metadata, and the frequency with which each artwork has been “favorited” on the wall and from within the ArtLens iPad/iPhone app. Users can save favorites to their iPad/iPhone from the wall by placing their device on one of eight docking stations, which identify an iPad/iPhone by detecting an RFID chip on the back of its case. The visitor’s favoriting and sharing activity creates metrics that enable museum staff to understand what artworks visitors are engaging with, creating a feedback loop with the museum. Visitors can also queue curated themes to display on the Collection Wall, playing them like a jukebox that changes every 40 seconds. These themes can be changed dynamically to connect with temporary exhibitions or create new ideas for the permanent collection.
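The 10-minute refresh described above amounts to a periodic pull from the content management system. Here is a minimal sketch of that loop, assuming placeholder functions; `fetch_collection` and `render_wall` are invented for illustration and are not the museum's actual API.

```python
# Hypothetical sketch of the wall's periodic refresh. The data source,
# function names, and record shape are all assumptions.

REFRESH_SECONDS = 10 * 60  # "every 10 minutes"

def fetch_collection():
    """Stand-in for querying the CMS: on-view artworks, metadata,
    and how often each has been "favorited" on the wall and in ArtLens."""
    return [{"id": 1, "title": "Example artwork", "favorites": 12}]

def render_wall(artworks):
    """Stand-in for pushing updated content to the Collection Wall."""
    return f"{len(artworks)} artwork(s) on display"

def refresh_once():
    """One iteration of the refresh cycle; a real system would run this
    on a REFRESH_SECONDS timer."""
    return render_wall(fetch_collection())
```

Because the wall reads from the asset-management data on every cycle rather than caching a static copy, a newly accessioned object or one taken off view shows up at the next refresh without manual intervention.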
To ensure that the content of the Collection Wall and the app stays dynamic and maintainable, all information is pulled directly from our digital asset management systems. As a result, any new accession, or any object that goes off view, is immediately reflected on the wall and in the iPad/iPhone app.
Face swap camera apps are all the rage these days, and Facebook even acquired one this month to get into the game. But the technology is getting creepier: you can now hijack someone else's face in real-time video.
A team of researchers at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University is working on a project called Face2Face, described as “real-time face capture and reenactment of RGB videos.”
Basically, they’re working on technology that lets you take over the face of anyone in a video clip. By sitting in front of an ordinary webcam, you can, in real-time, manipulate the face of someone in a target video. The result is convincing and photo-realistic.
The face swap is done by tracking the facial expressions of both the subject and the target, performing a fast “deformation transfer” between the two, warping the mouth to produce an accurate fit, and rerendering the synthesized face and blending it with the real-world illumination of the target video.
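The pipeline stages above can be sketched as a chain of functions. This is only an illustrative skeleton with placeholder data; the real system fits dense 3D face models to RGB video in real time, and every name below is an assumption.

```python
# Illustrative skeleton of the reenactment pipeline described above.
# Expressions are toy dictionaries, not real face-model parameters.

def track_expression(frame):
    """Stand-in for fitting a parametric face model to a video frame
    and extracting its expression coefficients."""
    return {"mouth_open": frame.get("mouth_open", 0.0)}

def deformation_transfer(source_expr, target_expr):
    """Drive the target's face with the source's expression parameters."""
    return {**target_expr, **source_expr}

def warp_mouth(expr):
    """Clamp to a plausible range so the mouth region fits the target."""
    expr["mouth_open"] = max(0.0, min(1.0, expr["mouth_open"]))
    return expr

def rerender(expr, illumination):
    """Blend the synthesized face with the target video's lighting."""
    return {"expression": expr, "illumination": illumination}

def reenact(source_frame, target_frame, illumination=0.8):
    """One frame of reenactment: track both faces, transfer, warp, render."""
    s = track_expression(source_frame)
    t = track_expression(target_frame)
    return rerender(warp_mouth(deformation_transfer(s, t)), illumination)
```

The key design point is that only expression parameters are transferred, not pixels: the target keeps their own identity and lighting while the source's expressions puppeteer the face.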
To test the system, the researchers invited subjects to puppeteer the faces of famous people (e.g. George W. Bush, Vladimir Putin, and Arnold Schwarzenegger) in video clips found on YouTube. You can see the results (and an explanation of the technology) in this 6.5-minute video: