Archive for VR/Augmented Reality/Computer Graphics
Think of the possibilities.
University of Central Florida assistant professor Jayan Thomas, in collaboration with Carnegie Mellon University associate professor Rongchao Jin, has developed a new material based on gold nanoparticles smaller than 2 nanometers, in a regime between atoms and nanoparticles called nanoclusters.
Thomas and his team found that nanoclusters developed by adding atoms in a sequential manner could provide interesting new optical properties that make them suitable for creating surfaces that would diffuse laser beams of high energy.
Protecting pilots and instruments from laser beams
Commercial or fighter pilots’ glasses or helmet shields could be coated with nanoclusters that potentially diffuse high-energy beams of light, such as laser beams.
Highly sensitive instruments needed for navigation and other applications could also be protected in case of an enemy attack using high-energy laser beams.
Real-time 3D telepresence
Thomas is also exploring the use of these particles in the polymer material used for 3D telepresence, to make it more sensitive to light. If successful, this could take current polymers a step closer to real-time 3D telepresence.
3D telepresence, aka the holodeck, would provide a holographic illusion to a viewer who is present in another location by giving that person a 360-degree view (in 3D) of everything that’s going on. It’s a step beyond 3D and is expected to revolutionize the way people watch television and how they participate in activities around the world. For example, by allowing a viewer to “walk around” a remote location as if in a virtual game, a surgeon could help execute a complicated medical procedure from thousands of miles away.
Others who contributed to the new material include Reji Philip from UCF’s NanoScience Technology Center, Panit Chantharasupawong from UCF’s College of Optics and Photonics, and Huifeng Qian from the Department of Chemistry at Carnegie Mellon University.
I wonder what would happen if they combined this with a metamaterial? A diffusion-based cloaking device?
Taking the research a step further out: in 1993, J. Storrs Hall conceived the idea of utility fog, consisting of a swarm of nanobots (“foglets”) that can take the shape of virtually anything, and change shape on the fly.
Perhaps Thomas’ nanoclusters could one day be developed into self-assembling modules that actualize Hall’s concept, and take it from the micron level down to the nanometer level?
In Utility Fog: The Stuff that Dreams Are Made Of, Hall suggested that an appropriate mass of Utility Fog can be programmed to “simulate most of the physical properties of any macroscopic object (including air and water), to roughly the same precision those properties are measured by human senses.”
That could include cars, houses, and just about any other object. “The pattern you both set your houses to could be anything, including a computer-generated illusion. In this way, Utility Fog can act as a transparent interface between ‘cyberspace’ and physical reality.”
“Consider the application of Utility Fog to a task such as telepresence. The worksite is enclosed in a cloud of Fog, which simulates the hands of the operators to assemble the parts and manipulate tools. The operator is likewise completely embedded in Fog. Here, the Fog simulates the objects that are at the worksite, and allows the operator to manipulate them.”
So what would you do with nanocluster foglets?
At the recent E3 2012 show, I saw the future of virtual reality and gaming.
It’s a robust stereoscopic head-mounted display (HMD) called the Oculus RIFT from hardware pioneer Palmer Luckey, shown off by legendary computer graphics guru John Carmack, technical director of id Software.
Using aspheric lenses and side-by-side stereoscopy, the Oculus RIFT boasts a wide field-of-view of 90 degrees horizontal and 110 degrees vertical, with a target price of $500, according to Palmer, which totally kills anything on the market today in the consumer price range. It also uses a gyroscope for orientation data, so you can actually look around inside the game environment quite naturally.
As evidence of its importance, Carmack is integrating support for the Oculus Rift into Bethesda Softworks’ DOOM 3: BFG Edition, slated for release in North America in October. (If you want to learn more about the Oculus hardware, you can check out detailed specs here.)
This is the beginning.
The RIFT isn’t yet a neat consumer package; it’s still a DIY device for enthusiasts, hackers, modders, and homebrewers, but “one of the big players will take this as one of the next steps in display [and] interaction technology,” as Carmack notes in the video.
It’s the spark of a revolution in the first-person shooter gaming niche.
Many DIYers on the Meant To Be Seen forums (Palmer’s DIY community, where he posts about his projects) are already working on open-source software projects to make the Oculus RIFT compatible with other first-person shooters besides Doom 3, such as Skyrim and Mirror’s Edge, as well as many Valve games, such as Portal 2 and Left 4 Dead 2.
I’ve had the chance to collaborate with Palmer over the past year or so on other VR projects, such as Shayd and Project Holodeck (a work in progress), and I’ve worked with a number of his prototypes, like the PR4 and others.
The lenses built into the HMD stretch the image output so that it wraps around a 90-degree field-of-view, giving the player a wide FOV that feels much like peripheral vision. To compensate for the optical distortion this wide FOV introduces, John Carmack coded the Doom 3: BFG Edition demo to pre-warp the image before it reaches the screen.
This same warping effect can be utilized in all FPS games to make them compatible with the RIFT. One MTBS community member, Joshua Lieberman, AKA Emerson, is working on his open-source project Biclops to achieve this “barrel” pre-warping with Skyrim and Mirror’s Edge.
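The pre-warp itself boils down to a small piece of per-pixel math. Here is a minimal Python sketch of the radial “barrel” distortion idea, assuming a simple two-coefficient polynomial model; the k1/k2 values and the `prewarp` function name are illustrative placeholders, not actual Rift or Biclops calibration code.

```python
# Sketch of the "barrel" pre-warp idea: for each output pixel, compute
# a source-texture coordinate scaled outward by a radial polynomial.
# Edge pixels sample further from the center, which compresses the
# rendered image toward the middle, canceling the lens's pincushion
# stretch. The k1/k2 coefficients are illustrative, not real values.

def prewarp(u, v, k1=0.22, k2=0.24):
    """Map an output pixel (u, v), centered at (0, 0) and normalized
    to [-1, 1], to the source-texture coordinate to sample from."""
    r2 = u * u + v * v                     # squared distance from lens center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # grows toward the edges
    return u * scale, v * scale

# The center is untouched; an edge pixel samples well outside [-1, 1].
center = prewarp(0.0, 0.0)
edge = prewarp(1.0, 0.0)
```

In a real renderer this mapping runs in a fragment shader over the framebuffer, but the geometry of the warp is the same.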
Other amazing VR games coming
Valve is also diving into wearable computing, and it is highly likely that they are already working with the Oculus RIFT to integrate it with their Source Engine (this link explains exactly how to integrate a gyroscope properly in the Source Engine for use with an HMD, and I’ve been playing a lot with that C++ code recently).
Thus existing games like Left 4 Dead 2 and Portal 2 could be experienced with enhanced visual immersion, and even with head-orientation tracking provided through a gyroscope input.
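As a rough illustration of what gyroscope-based head tracking involves, here is a minimal Python sketch that integrates angular velocity into yaw/pitch view angles each frame. The class and names are hypothetical, not the Source Engine API; a real integration would feed these angles into the engine’s view setup every frame.

```python
import math

# Minimal sketch of gyro-driven head tracking: each frame, integrate
# the gyroscope's angular velocity (rad/s) over the frame time into
# yaw (look left/right) and pitch (look up/down) view angles.

class HeadTracker:
    def __init__(self):
        self.yaw = 0.0    # radians, rotation about the vertical axis
        self.pitch = 0.0  # radians, looking up/down

    def update(self, yaw_rate, pitch_rate, dt):
        self.yaw = (self.yaw + yaw_rate * dt) % (2 * math.pi)
        # Clamp pitch so the virtual camera can't flip over backwards.
        self.pitch = max(-math.pi / 2,
                         min(math.pi / 2, self.pitch + pitch_rate * dt))

tracker = HeadTracker()
# 60 frames at ~16.7 ms each, turning at 1 rad/s:
# the view ends up about 1 radian to the side.
for _ in range(60):
    tracker.update(1.0, 0.0, 1 / 60)
```

Real sensors drift, so production head trackers fuse the gyro with accelerometer or magnetometer data, but the per-frame integration above is the core of the effect.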
Given the diverse and dynamic community around the Source Engine, if the RIFT is made easily compatible, numerous mods built specifically for a head-mounted display would begin springing up.
Of course, other engines can be just as easily made compatible, particularly the new Unity Pro 4 Engine, which has already been used as the basis for numerous VR experiences at the MxR Lab and the USC Interactive Media Division. The current game I’m producing, Wild Skies, will be using Unity Pro 4 with the Oculus RIFT as well as positional tracking for a full 360-degree virtual play space.
Consumer version coming
In his latest update, Palmer revealed he is talking with Valve, Bethesda, Epic, Crytek, and Unity about a consumer version of the headset to be developed in 2013. This new version will have built-in support for the most popular game engines, a higher-resolution screen, and wider field-of-view optics.
Now, since teaming up with Carmack a few months ago, Palmer can finally launch the final iteration of his RIFT, and dole out some harsh VR justice to the universe. A Kickstarter for the first round of development of the RIFT head-mounted display is starting around July 19, according to the latest update from Palmer, to coincide with Carmack’s involvement at QuakeCon and Gamescom.
“Imagine an HMD with a massive field of view and more pixels than 1080p per eye, wireless PC link, built-in absolute head and hand/weapon/wand positioning, and native integration with some (if not all) of the major game engines, all for less than $1,000 USD,” Palmer says. “That can happen in 2013!”
There’s going to be a lot of innovation with this kind of hardware over the next ten years, during the coming console cycle, if you even want to call it a console cycle anymore. All I know is it’s going to be a hell of a decade.
By the time 2013 comes to a close, the returning VR industry will be back in full swing — this time as part of the multi-billion dollar games industry.
It’s now obvious that immersive virtual reality is finally back in the consumer market — with a vengeance.
Especially with the recent advent of FOV2GO, a free DIY portable fold-out iPhone and Android viewer that turns the smartphone screen into a 3-D VR system.
You can create one with foamboard and two cheap plastic lenses, and downloadable software lets you create your own virtual worlds or environments to display.
(There’s also an iPad3 version.)
What made this possible: high-resolution screens and built-in gyroscopes. The retina display on the iPhone 4, for instance, provides 960×640 DVGA high resolution on a compact screen.
That means we can now construct ultra-lightweight VR head-mounted displays. The gyroscope lets the device track your head movement so you can look around in real time.
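Some back-of-envelope arithmetic shows why screen resolution is the gating factor for these phone-based HMDs. Assuming a 960×640 display split side-by-side and a 90-degree horizontal FOV per eye (an illustrative figure, not a measured spec):

```python
# How many pixels per degree does a phone-based HMD deliver?
# A 960x640 display split side-by-side gives each eye 480 pixels
# of width; spread across a 90-degree horizontal FOV, that's only
# about 5.3 pixels per degree, far below the ~60 px/deg often
# cited for normal 20/20 visual acuity.

screen_width_px = 960
eye_width_px = screen_width_px / 2   # side-by-side stereo halves the width
fov_degrees = 90                     # illustrative per-eye horizontal FOV
pixels_per_degree = eye_width_px / fov_degrees
```

The same arithmetic explains why wider-FOV headsets need dramatically higher-resolution panels: every extra degree of FOV spreads the same pixels thinner.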
FOV2GO is actually a hardware and software kit for the creation of immersive virtual reality experiences using smartphones, tablets, and other mobile devices. More info here.
At the Maker Faire showcase of FOV2GO in May, we showed off a few apps such as Tales from the Minus Lab and Shayd Mobile, both running in the Unity3D game engine with custom C# scripts to create the side-by-side in-game camera.
I created a mobile version of the Shayd virtual reality installation. While the full installation of Shayd encompasses an entire motion capture stage and wide-FOV head-mounted display, Shayd Mobile is much simpler, utilizing the FOV2GO stereoscopic Unity package developed by Perry Hoberman.
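The core of a side-by-side stereo rig is just two cameras offset by half the interpupillary distance, each rendering into its own half of the screen. Here is a minimal Python sketch of that geometry; the values, names, and dict layout are illustrative, not the actual FOV2GO package code, which does this with Unity cameras and viewport rects.

```python
# Sketch of side-by-side stereo for a phone VR viewer: two virtual
# cameras, offset left/right by half the interpupillary distance
# (IPD), each drawing into one horizontal half of the display.

IPD = 0.064  # average human interpupillary distance, about 64 mm

def stereo_cameras(head_pos, ipd=IPD):
    """Return (left, right) camera descriptions: world position plus a
    screen viewport as (x, y, width, height) fractions of the display."""
    x, y, z = head_pos
    half = ipd / 2
    left = {"pos": (x - half, y, z), "viewport": (0.0, 0.0, 0.5, 1.0)}
    right = {"pos": (x + half, y, z), "viewport": (0.5, 0.0, 0.5, 1.0)}
    return left, right

# Head at eye height (~1.7 m): the two cameras straddle it by the IPD.
left, right = stereo_cameras((0.0, 1.7, 0.0))
```

The slight horizontal offset between the two rendered images is what the brain fuses into depth; the viewer’s lenses then magnify each half to fill each eye’s view.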
The game could also play on an iPad 3 with very large stereoscopic lenses, allowing for a tasty field-of-view approaching 130 degrees.
Considering that the human eye sees at about 180 degrees, this was pretty realistic! You start out walking around with a flashlight in a dark office, looking for a light switch. Using other inputs on the iPad, you can click on letters on the desk and read them by lantern, or use your flashlight. Eventually you can pull out your gun and fire bullets!
The epicenter for FOV2GO (and other cool projects, including motion capture with the Kinect) is the Mixed Reality Lab (MxR), part of the Institute for Creative Technologies at the University of Southern California (USC).
But there’s something coming that’s even cooler. Much cooler.
People have been dreaming about virtual reality since Neuromancer and Snow Crash, and in the late ’90s it really captured the public imagination. VR companies were popping up left and right, but the technology wasn’t quite there yet, and the industry crashed and burned around the same time as the dotcom bubble.
Now that it’s experiencing a resurgence fifteen years later, a ton of pseudo-VR devices are coming out that don’t really make any sense.
There’s been a ton of tinkering with controllers, motion devices, stereoscopy, and the like to make gaming a bit more interesting. Microsoft, Sony, and Nintendo have all developed devices that encourage players to get off the couch and into the action: the Kinect, the PlayStation Move, the Wiimote, the Razer Hydra for PC, and most recently, the Wii U.
Sony, Vuzix, Sensics, and NVIS are all trying to push toward the consumer market, but their HMDs (head-mounted displays), like the Sony HMZ-T1 and the Vuzix Wrap 920, are either too expensive or too low-quality, and they have a narrow field-of-view, so you have no sense of peripheral vision.
There’s also no built-in gyroscope or head-tracking, so even if you did play a game with it, you couldn’t move your head around in the virtual environment.
These consumer HMDs feel like a floating television that you are looking at from five feet away. So they aren’t sparking any real innovation in the games industry, or first-person shooters. Besides, if it looks just like a TV, why not just watch TV?
Immersive virtual reality hardware
But I found something at the recent E3 Expo 2012 that I’ve been evangelizing since I was a kid. Something I know in my heart of hearts is the future, the immediate future, of gaming.
Immersive virtual reality hardware — with real HMDs — has finally arrived. And it’s about to make a huge splash in the first-person shooter hardcore gaming niche.
More in my next post….
OK, this one pushes me over the “Onion threshold,” to coin a term.
Hey, I’m not making this stuff up — it comes from IEEE Spectrum, a credible source, and it’s not April 1!
Anyway, it turns out Yamagata University researchers are developing a robot to make sure you’ll never, ever have to be alone again.
Think of it as Facebook Version 23 meets Revenge of the Nerds.
MH-2 (that’s “MH” for “miniature humanoid”) is a wearable telepresence robot that acts as an avatar for your remote friend, who’s also terrified of being alone. Know what? I don’t want to be alone with the creepy little robot, OK?
MH-2 is designed to be able to mimic human actions as accurately and realistically as possible. Think Telenoid, except it can actually do stuff besides wiggle around semi-creepily, as Spectrum puts it.
Instead of having said friend come along with you on a trip, for example, you can bring along an MH-2, explains Spectrum. Back home, your friend puts on a 360-degree immersive 3D display and stands in front of some sort of motion capture environment (like a Kinect, for example). Then, they get to see (shudder) whatever the MH-2 sees.
Meanwhile, the robot on your shoulder acts like an avatar, duplicating the speech and gestures of your friend right there for you to interact with directly.
The 20-DOF Miniature Humanoid MH-2: a Wearable Communication System, by Yuichi Tsumaki, Fumiaki Ono, and Taisuke Tsukuda, from Yamagata University in Japan, was presented this month at the 2012 IEEE International Conference on Robotics and Automation, in St. Paul, Minn.
Shortly after that, tiny robot outfits mysteriously showed up in K-Marts nationwide. OK, I made that part up.
Update: Kidding aside, a future version of this innovative tech could possibly find its way into future 3D versions of gadgets like Google Glass.
What? You thought distracted drivers texting on cell phones and swerving erratically were a problem? That’s so 2011.
Imagine a future in which icons flash on your car windshield, hologram-style, as your car approaches restaurants, stores, historic landmarks or the homes of friends, effuses CNN.
Simply point your hand at them, and the icons open to show real-time information: when that bridge over there was built, what band is playing at that nightclub on the left, whether that new café up the street has any tables available. Wave your hand again, and you’ve made a restaurant reservation.
Wow, it’s like Second Life, or Fringe, or something, dude!
BTW, note the text message in the photo above: “On my way to HiDive Bar!” Oh yeah, now there’s the perfect combo: AR, booze, and driving.
“Driver, put down the Kinect and step out of the vehicle!”
‘Driving while intoxicated’
Mercedes-Benz showed off this wacky vision of the future of driving at CES in Vegas, where wacky is the new normal. Mercedes’ “mBrace” (get it?) is a $200/year system that will let drivers run customized apps like Yelp and Facebook, controlled by voice commands or a dashboard touchscreen, targeted to Generation Y.
“We’re working on a new generation of vehicles that truly serve as digital companions,” said Dieter Zetsche, head of Mercedes-Benz Cars, in a keynote speech at CES. “They learn your habits, adapt to your choices, predict your moves and interact with your social network.” OK, “My Mother the Car” on acid. Got it.
Not to be outdone, Ford also plans to display real-time discounts relevant to a driver’s location, and Audi and Kia also plan Web-based dashboard entertainment systems.
In December, the National Transportation Safety Board recommended a ban on all use of mobile devices while driving, including hands-free devices, Technology Review points out. “One board member compared the use of phones in cars to driving while intoxicated.”
Well, at least Kia is doing one rational thing: a “user-centered driving concept” that uses an infrared LED and camera to monitor the driver’s face for alertness. The system would recognize whether the driver’s eyes are open or closed, guarding against an accident caused by the driver falling asleep. OK, that’s nice, but how about recognizing when the driver is distracted by dopey AR displays and applying a taser or something?
Here’s an app I want: one that warns me when a Ford, Mercedes, Audi, or Kia — or one of those autonomous cars — is approaching so I can swerve the hell out of the way.