To understand VR in depth, you need to know these technical terms

Because VR (virtual reality) is not yet widespread, many people are still unclear about its core concepts. This article clarifies the key terms; by the end, you should have a much deeper understanding of VR.


1. Microlens array display technology

Through in-depth study of microlens array structures, the principle by which a microlens array magnifies a micro-pattern has been revealed. Building on this, the relationships between the microlens array's structural parameters, the micro-pattern's structural parameters, and the micro-pattern array's moving speed, moving direction, and magnification have been established. Microlens arrays are thus used to magnify micro-patterns and produce dynamic, stereoscopic displays.
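
As an illustration of the magnification relationship, the classic moiré-magnifier result says that two periodic structures with slightly different pitches produce an enlarged image, and the magnification grows as the two pitches approach each other. A minimal sketch, assuming the standard moiré-magnification model (the pitch values are illustrative):

```python
import math

def moire_magnification(lens_pitch, pattern_pitch):
    """Moire magnifier: a lens array of pitch p1 over a micro-pattern of
    pitch p2 yields a moire image with period p1*p2/|p1 - p2|, i.e. a
    magnification of the pattern by p1/|p1 - p2|."""
    if math.isclose(lens_pitch, pattern_pitch):
        raise ValueError("pitches must differ for a finite magnification")
    return lens_pitch / abs(lens_pitch - pattern_pitch)

# A 1.00 mm lens pitch over a 0.99 mm pattern pitch magnifies ~100x.
print(moire_magnification(1.00, 0.99))
```

Note how a 1% pitch difference already gives a 100x magnification, which is why tiny structural parameters matter so much in these displays.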

2. Near-eye light field display

The new head-mounted display developed by NVIDIA is called the Near-Eye Light Field Display. It reuses some components of Sony's HMZ-T1 3D head-mounted OLED display, with the surrounding structure manufactured by 3D printing.

The near-eye light field display uses a microlens array with a focal length of 3.3 mm in place of the optical lens group used in earlier products of this kind. This design reduces the thickness of the entire display module from 40 mm to 10 mm, making it easier to wear. At the same time, NVIDIA's latest GPU performs real-time ray tracing of the source image, decomposing it into dozens of different view arrays; the microlens array then reconstructs the image in front of the user, so that the viewer's two eyes naturally observe a stereoscopic image from slightly different angles, just as in the real world.

Since the near-eye light field display reconstructs the scene through the microlens array, vision-correction parameters can simply be added during the GPU's computation to offset visual defects such as myopia or hyperopia. This means glasses wearers can also enjoy a sharp, realistic 3D picture with the naked eye.

3. Field of view

In an optical instrument, the field of view (FOV) is the angle, with the instrument's lens at its apex, subtended by the two edges of the largest region over which the lens can form an image of the measured object. The field of view determines the instrument's visual range: the larger the field of view, the wider the visual range and the smaller the optical magnification.

In layman's terms, objects outside this angle are not captured by the lens. In a display system, the field of view is the angle subtended at the viewpoint (the eye) by the edges of the display.
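
That definition reduces to a one-line formula. A minimal sketch, assuming a flat display viewed head-on from its center line:

```python
import math

def field_of_view_deg(display_width, eye_distance):
    """Angle subtended at the eye by a display of the given width,
    viewed head-on from eye_distance (same units for both)."""
    return math.degrees(2 * math.atan2(display_width / 2, eye_distance))

# A 2 m wide screen viewed from 1 m away subtends 90 degrees.
print(field_of_view_deg(2.0, 1.0))
```

The same function explains why HMD optics matter: the closer the (virtual) screen edge sits to the eye relative to its width, the wider the field of view.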

4. Glasses-free (naked-eye) 3D

Glasses-free 3D exploits the parallax between the two eyes to deliver a realistic stereoscopic image with space and depth, without any auxiliary equipment (3D glasses, helmets, etc.). Technically, glasses-free 3D falls into three types: parallax barrier, lenticular lens, and directional backlight. Its biggest advantage is freedom from glasses, but it still has significant shortcomings in resolution, viewing angle, and viewing distance.

5. HMD

A head-mounted display (HMD) is a type of wearable display, also known as a glasses-type display or portable theater. These are popular names: the device is shaped like a pair of glasses and is designed to present the video output of audio/video players on a large virtual screen, hence the nickname "video glasses".

Video glasses originated from military requirements and were first applied in the military. Today's video glasses are at roughly the stage the early "brick" mobile phones once occupied; with the ongoing convergence of 3C (computing, communications, and consumer electronics), they are expected to develop very rapidly.

6. HMZ

On April 24, 2015, Sony announced the discontinuation of the HMZ series. The line spanned three generations: the HMZ-T1 in 2011, the HMZ-T2 in 2012, and the HMZ-T3/T3W in 2013. The HMZ-T1's display resolution was only 720p, its headphones delivered virtual 5.1-channel sound, and a bulky external processing unit had to be dragged along.

The HMZ-T1 used two 0.7-inch 720p OLED screens. When worn, the two small screens appeared like a 750-inch giant screen viewed from 20 meters away. In October 2012, Sony released a slimmed-down version of the HMZ-T1, the HMZ-T2.
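
The "750-inch screen at 20 meters" claim is really just a statement about angular size, which is easy to check with a back-of-the-envelope sketch (1 inch = 25.4 mm):

```python
import math

def virtual_screen_angle_deg(diagonal_inches, distance_m):
    """Angle subtended by a screen diagonal at the given viewing distance."""
    diagonal_m = diagonal_inches * 0.0254
    return math.degrees(2 * math.atan2(diagonal_m / 2, distance_m))

# Sony's "750-inch screen at 20 m" works out to roughly a
# 51-degree diagonal viewing angle.
print(round(virtual_screen_angle_deg(750, 20.0), 1))
```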

Compared with the HMZ-T1, it cut weight by 30% and dropped the built-in headphones, letting users plug in their own favorites. Although the screens kept the same 0.7-inch 720p OLED specifications, a 14-bit Real RGB 3×3 color-transformation matrix engine and a new optical filter noticeably improved picture quality.

The 2013 HMZ-T3/T3W was a substantial upgrade: it introduced wireless signal transmission for the first time, so a wearer of the wireless HMZ-T3W could move around within a limited range, no longer tethered by cables.

7. Ray tracing algorithm

For generating visible images in 3D computer graphics, ray tracing is more realistic than ray casting or scanline rendering. It works by tracing light paths backward from an imaginary camera lens into the scene. As a large number of such rays traverse the scene, the visible geometry and the lighting conditions seen from the camera can be reconstructed. Reflection, refraction, and absorption are computed wherever a ray intersects an object or medium in the scene.
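
The core geometric step of any ray tracer is the ray-object intersection test. A minimal sketch for the simplest primitive, a sphere, assuming a normalized ray direction (the function and variable names are illustrative):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray
    origin + t * direction, or None if the sphere is missed.
    direction is assumed to be normalized (unit length)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # discriminant of the quadratic in t
    if disc < 0:
        return None               # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None   # ignore hits behind the camera

# A ray from the origin shooting down +z hits a unit sphere
# centered at (0, 0, 5) at distance t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```

A full tracer repeats this test for every object, then spawns secondary rays at the nearest hit to account for reflection, refraction, and shadows.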

Ray-traced scenes are typically described mathematically by programmers, built by visual artists using intermediate tools, or assembled from images and model data captured by techniques such as digital photography.

8. Realistic rendering technology

In a virtual reality system, the requirements on realistic rendering differ from those of traditional photorealistic graphics. Traditional rendering only demands image quality and realism; in VR, the display must in addition update no more slowly than the user's visual perception changes, otherwise the picture lags.

Therefore, real-time 3D rendering in VR must generate frames on the fly at no fewer than 10 to 20 frames per second, while also remaining realistic and reflecting the physical properties of the simulated objects. To make scenes more realistic in real time, texture mapping, environment mapping, and anti-aliasing are usually employed.
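
The frame-rate floor translates directly into a per-frame time budget, which is the number renderers actually work against. A trivial sketch (the 90 fps figure is the target of modern PC headsets, not something this text claims):

```python
def frame_budget_ms(target_fps):
    """Time available to render one frame at the target frame rate."""
    return 1000.0 / target_fps

# At the 10-20 fps floor, each frame may take 100 ms or 50 ms;
# a headset targeting 90 fps leaves only about 11 ms per frame.
for fps in (10, 20, 90):
    print(fps, round(frame_budget_ms(fps), 1))
```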

9. Image-based real-time rendering technology

Image-Based Rendering (IBR) differs from traditional geometry-based rendering, which first builds a model and places light sources before drawing. IBR instead generates images of unseen viewpoints directly from a series of source images, transforming and interpolating them to obtain scene views from different visual angles.
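
The crudest possible illustration of producing an in-between view from existing images is a plain cross-dissolve of two aligned views; real IBR first warps pixels using correspondences or depth, and this sketch shows only the degenerate blending step (the pixel lists are illustrative):

```python
def interpolate_views(view_a, view_b, t):
    """Blend two aligned images (flat lists of pixel intensities) to
    approximate a viewpoint between them; t=0 gives view_a, t=1
    gives view_b. Real IBR warps pixels before blending."""
    return [(1 - t) * a + t * b for a, b in zip(view_a, view_b)]

left = [0, 64, 128, 255]
right = [255, 64, 0, 255]
print(interpolate_views(left, right, 0.5))
```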

10. 3D virtual sound technology

The stereo sound we hear in everyday life comes from just the left and right channels, so the sound clearly feels as if it sits on a plane in front of us. It is not like someone calling us from behind, where the sound comes from the actual source and we can accurately locate it. Current stereo clearly cannot achieve this. Three-dimensional virtual sound lets the listener judge a source's position from what they hear: in a virtual scene, what the user hears fully matches what the auditory system would experience in a real environment. Such a sound system is called three-dimensional virtual sound.
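
One of the cues such a system must reproduce is the interaural time difference: sound from the side reaches the near ear slightly before the far ear, and the brain uses that delay to localize the source. A minimal sketch using a simplified head model (the head width and speed of sound are nominal values, and real systems use full head-related transfer functions):

```python
import math

def interaural_time_difference(azimuth_deg, head_width=0.18, c=343.0):
    """Simple-model ITD in seconds: the extra path to the far ear is
    roughly head_width * sin(azimuth), traveled at the speed of
    sound c. Azimuth 0 = straight ahead, 90 = directly to one side."""
    return head_width * math.sin(math.radians(azimuth_deg)) / c

# A source directly to the side arrives about half a millisecond
# earlier at the near ear.
print(round(interaural_time_difference(90) * 1000, 3))
```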

11. Speech recognition technology

Automatic Speech Recognition (ASR) converts a speech signal into text that a computer can process, allowing the computer to recognize a speaker's commands and dictated content. Fully accurate recognition is very difficult: the signal must pass through stages such as parameter extraction, reference template construction, and pattern matching. Through continued research using methods such as Fourier transforms and cepstral parameters, recognition accuracy keeps improving.
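
A classic instance of the pattern-matching stage is dynamic time warping (DTW), which early template-based recognizers used to compare an utterance against stored references despite differences in speaking rate. A minimal sketch over 1-D feature sequences (real systems compare multi-dimensional acoustic features, and modern recognizers use statistical or neural models instead):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences:
    finds the lowest-cost monotonic alignment, so sequences that
    differ only in tempo still match closely."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip in a
                                 cost[i][j - 1],      # skip in b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

# The same "word" spoken twice as slowly still matches its template.
template = [1, 3, 4, 3, 1]
slower = [1, 1, 3, 3, 4, 4, 3, 3, 1, 1]
print(dtw_distance(template, slower))
```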

12. Speech synthesis technology

Text-to-Speech (TTS) refers to the technique of synthesizing speech so that computer output can be voiced accurately, clearly, and naturally. There are two general approaches: recorded playback, and text-to-speech conversion. In a virtual reality system, speech synthesis improves immersion and compensates for the limits of visual information.

13. Natural human-machine interaction technology

In virtual reality systems, the goal is to let users interact with the computer-generated virtual environment through their sensory organs and natural actions: eyes, gestures, ears, speech, nose, and skin. The interaction techniques used in such virtual environments are collectively called natural human-machine interaction technology.

14. Eye tracking technology

Eye-movement-based interaction is also known as gaze (line-of-sight) tracking. It complements the shortcomings of head tracking, and it is simple and direct.

15. Facial expression recognition technology

Current research in this area is still far from the effect people expect, but the results so far already show its appeal. The technique generally has three steps. First, facial expression tracking: the user's expression is recorded by a camera and recognized through image analysis. Second, facial expression coding: researchers use the Facial Action Coding System (FACS) to decompose expressions and to classify and encode the underlying facial movements. Finally, expression recognition: FACS provides a systematic pipeline for recognizing expressions.

16. Gesture recognition technology

The position and shape of the hand are measured accurately with a data glove or a depth image sensor (e.g., Leap Motion, Kinect), enabling a virtual hand to manipulate virtual objects in the environment. A data glove determines the position and orientation of the hand and its joints using bend, twist, and curvature sensors on the fingers and palm; depth-sensor-based gesture recognition instead computes, from the captured depth image, data such as the palm's orientation and the bend angles of the fingers.
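
Given three joint positions from such a sensor, a finger's bend angle falls out of a simple dot product. A minimal sketch (the joint coordinates are illustrative):

```python
import math

def joint_angle_deg(a, b, c):
    """Bend angle at joint b given three 3-D joint positions,
    e.g. knuckle, middle joint, fingertip from a depth sensor."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A straight finger gives 180 degrees; a right-angle bend gives 90.
print(joint_angle_deg((0, 0, 0), (1, 0, 0), (2, 0, 0)))
print(joint_angle_deg((0, 0, 0), (1, 0, 0), (1, 1, 0)))
```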

17. Real-time collision detection technology

In everyday life, people build up physical intuitions: solid objects cannot pass through each other, objects dropped from a height fall freely, thrown objects follow projectile motion, and everything is affected by gravity, air flow, and so on. To fully simulate a real environment and prevent interpenetration, a virtual reality system must include real-time collision detection. Moore proposed two collision detection algorithms: one handles triangulated object surfaces, the other handles collision detection in polyhedral environments. Preventing penetration involves three main parts. First, the collision must be detected. Second, object velocities must be adjusted in response to the collision. Finally, if the collision does not immediately separate the objects, a contact force must be computed and applied until they separate.
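
In practice the detection step usually begins with a cheap broad-phase test before any exact surface-level algorithm (such as Moore's) is run. A minimal sketch using bounding spheres:

```python
def spheres_collide(c1, r1, c2, r2):
    """Bounding spheres intersect when the distance between their
    centers is no greater than the sum of their radii. Comparing
    squared distances avoids a square root on every test."""
    dist_sq = sum((c1[i] - c2[i]) ** 2 for i in range(3))
    return dist_sq <= (r1 + r2) ** 2

print(spheres_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # overlapping
print(spheres_collide((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # separated
```

Only pairs that pass this filter are handed to the exact (and far more expensive) surface-level test, which keeps the check real-time even with many objects.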

18. Three-dimensional panorama technology

The panorama is one of today's most popular visual technologies: a virtual reality technique that produces realistic imagery based on image rendering. Generating a panorama starts with a sequence of image samples captured by panning or rotating a camera; image stitching then produces a panoramic image with strong dynamic and perspective effects; finally, image fusion gives the panorama its realism and interactivity for the user. The technique can also extract depth information from the panorama to recover the three-dimensional structure of the scene and build a model. The method is simple, shortens the design cycle, and greatly reduces cost, which is why it is currently so popular.
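
A common first step when stitching shots from a rotating camera is warping each image onto a cylinder, so that a pure camera rotation becomes a pure horizontal translation between images. A minimal sketch of the horizontal coordinate mapping (the focal length in pixels is illustrative; full stitching also needs feature matching and blending):

```python
import math

def cylindrical_x(x, f):
    """Map a horizontal image coordinate x (measured from the image
    center, in pixels) onto a cylinder of radius f (the focal length
    in pixels). Center pixels are unchanged; edges are compressed."""
    return f * math.atan2(x, f)

# With f = 500 px, the center column stays put while a column
# 500 px out is pulled in toward the center.
print(cylindrical_x(0, 500), round(cylindrical_x(500, 500), 1))
```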
