Progress of the 3DTV Project: Some Intermediate Results
Here are the highlights of some intermediate results:
Many different candidate technologies are comparatively assessed. It is concluded that multiple synchronized video recordings are
currently the most promising technology. Other alternatives, such as single-camera techniques, pattern-projection-based approaches, and holographic
cameras, are also under investigation.
A robot equipped with a laser scanner and an omnidirectional camera captures the 3D structure of the environment as it travels. The system
is based on a stereo technique and calculates dense depth fields.
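As background for the dense depth fields mentioned above, a stereo system recovers depth from disparity via the standard pinhole relation Z = f·B/d. The sketch below is illustrative only; the focal length and baseline values are made up and are not the robot's actual calibration.

```python
# Convert a dense disparity map (pixels) to metric depth with the
# pinhole stereo relation Z = f * B / d.  Zero disparity maps to
# infinity (point at the horizon).  Values here are illustrative.
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Per-pixel depth from disparity; f and B are camera parameters."""
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)

disp = np.array([[8.0, 4.0], [2.0, 0.0]])
depth = disparity_to_depth(disp, focal_px=800.0, baseline_m=0.12)
```

Note how halving the disparity doubles the depth: nearby points shift more between the two views than distant ones.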
Many experimental multi-camera capture systems are designed and tested. Synchronization among the cameras is achieved. 3D models of fire
and smoke are developed and reconstructed.
Many techniques are developed to generate automated 3D personalized human avatars from multi-camera video input.
Image-based methods are developed for surface reconstruction of moving garment from multiple calibrated video cameras.
A method, based on synthetic aperture radar techniques, is developed to increase the resolution of CCD based holographic recording.
Algorithms are developed for denoising interference patterns to improve the accuracy of reconstructed 3D scenes from holographic recordings.
Signal processing methods are developed for automated detection of face, facial parts, facial features and facial motion in recorded video.
A method for generating and animating a 3D model of a human face is developed.
A method to track gesture features and analyze speech-correlated gestures is developed.
A technique for interactive view-dependent streaming of progressively encoded 3D models over lossy networks is developed. The method
is based on progressive octree representation.
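The progressive octree idea behind such streaming can be sketched as follows: each octree level refines the occupied cells of the previous one, so a receiver can reconstruct a coarse model from any prefix of the stream and refine it as more levels arrive. This is an illustrative toy, not the project's actual codec; the helper names are hypothetical.

```python
# Toy progressive octree coding of a point cloud in the unit cube:
# level d partitions space into (2^d)^3 cells, and each level's set of
# occupied cells refines the previous one.  Streaming the levels in
# order gives coarse-to-fine reconstruction.
import numpy as np

def octree_levels(points, depth):
    """Return, per level 1..depth, the set of occupied cell indices."""
    levels = []
    for d in range(1, depth + 1):
        cells = np.floor(points * (2 ** d)).astype(int)
        levels.append({tuple(c) for c in cells})
    return levels

def reconstruct(levels, upto):
    """Decode a stream prefix: cell centers at level `upto`."""
    size = 1.0 / (2 ** upto)
    return np.array([(np.array(c) + 0.5) * size
                     for c in sorted(levels[upto - 1])])

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
levels = octree_levels(pts, depth=4)
coarse = reconstruct(levels, upto=2)   # usable after only 2 levels
```

A lossy network can drop the finer levels and the receiver still renders a valid, lower-resolution model, which is the property that makes progressive representations attractive for view-dependent streaming.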
A method to represent 3D objects using multiresolution tetrahedral meshes is developed.
An algorithm for reconstructing a 3D environment from images recorded by multiple calibrated cameras is developed.
A novel approach for digitizing archeological excavation sites in 3D from multiple views is developed.
Comparative quality assessments for different 3D reconstruction methods are conducted.
Methods for filtering to eliminate the jitter in captured motion data are developed.
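One simple member of this family of filters is a moving average applied per channel; the sketch below assumes a fixed window size and edge padding, and is not the project's specific filter design.

```python
# Minimal jitter filter for captured motion data: a moving average
# over a short window, applied to one channel.  Window size is an
# illustrative assumption.
import numpy as np

def smooth_motion(channel, window=5):
    """Moving average with edge padding so output length matches input."""
    pad = window // 2
    padded = np.pad(channel, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)              # smooth "true" motion
jittery = clean + 0.05 * rng.standard_normal(200)
smoothed = smooth_motion(jittery)
```

The trade-off is the usual one: a wider window removes more jitter but also rounds off fast, genuine motion, which is why practical systems tune or adapt the filter.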
A technique is developed to recognize head and hand gestures; the method is then used to synthesize speech-synchronized gestures.
A method for representing scalable 3D image-based video objects is developed.
Objective quality assessment methods for 3D video objects are developed.
Software tools for easy description of 3D video objects are developed.
Different 3D video synthesis methods are evaluated.
One of the first comparative tests in the literature between point-based and surface-based representations is completed, indicating that
mesh-based representations are more rate-distortion efficient than dense depth fields.
Joint research continued for obtaining constant topology and fixed connectivity time-varying surface representations.
An algorithm based on volumetric representations was proposed for 3D extraction from stereo or multi-view data: the scene volume is represented
as a set of parallel planes and swept plane by plane, with angular sweeping applied to obtain better accuracy.
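The plane-sweep idea can be illustrated with a toy fronto-parallel sweep over rectified stereo, where each candidate plane corresponds to one integer disparity: the second image is shifted to that plane, a per-pixel matching cost is recorded, and the winning plane gives the depth. This is a simplified sketch (no angular sweeping, absolute-difference cost), not the proposed algorithm itself.

```python
# Toy plane sweep for rectified stereo: one cost slice per candidate
# plane (disparity), winner-take-all over the cost volume.
import numpy as np

def plane_sweep(left, right, max_disp):
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):           # sweep the volume plane by plane
        shifted = np.roll(right, d, axis=1)
        costs[d, :, d:] = np.abs(left - shifted)[:, d:]
    return np.argmin(costs, axis=0)         # per-pixel best plane

left = np.tile(np.arange(16.0), (8, 1))     # synthetic ramp image
right = np.roll(left, -3, axis=1)           # true disparity = 3
disp = plane_sweep(left, right, max_disp=5)
```

Real systems replace the absolute difference with a robust, window-based cost and sweep planes in metric depth rather than integer disparity, but the volumetric structure is the same.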
An emotion analysis-synthesis system was developed for more realistic head-and-shoulder animations, with several academic outputs as well as a
demo on the 3DTV NoE webpage.
Research was carried out on object-specific representations, with emphasis on cloth modelling and fluid animation, on head animation, and on
pseudo-3D representations; a framework for 3D video object synthesis was developed, and scalable representations for image-based video objects were proposed.
Coding and Compression:
A technique to automatically segment stereo 3D video sequences is developed.
A method for optimal rate and input format control for content and context adaptive streaming video is developed.
An approach for minimum delay content adaptive video streaming over variable bit-rate channels is developed.
Rate control techniques for 3D video streaming are developed.
A full end-to-end multi-view video codec is implemented and tested.
A comparative study investigating the effects of different disparity maps and their properties in an embedded, JPEG2000-based, disparity-compensated
stereo image coder is completed.
The effects of various lossy coding techniques on stereo images are investigated.
A storage format for 3D video is developed.
Statistical properties of spatio-temporal prediction for multi-view video coding are evaluated.
NoE partners actively participated in MPEG standardization activities for 3D video.
Techniques for coding multi-view video are developed.
A multiview video coding proposal submitted to MPEG by a Partner of our project performed best in subjective tests, outperforming eight other
proposals from different parts of the world.
Available MPEG tools are evaluated for multi-view video synthesis.
Multi-view test data sets using arrays of eight cameras have been produced and made available to MPEG and the general scientific community.
Various 3D mesh compression techniques are developed and tested.
Methods for coding and rendering free-view point video are developed.
Representation for 3D scenes for interactive applications is investigated.
Compression techniques for holograms are developed.
Multiple description coding techniques for 3D are investigated.
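One common way to form multiple descriptions of video is temporal splitting: even frames go into one description, odd frames into the other, and a lost description is concealed by interpolating from its neighbours. The sketch below illustrates that generic scheme with 1-D "frames"; it is not the NoE's specific method.

```python
# Multiple description coding by temporal splitting: two independently
# decodable streams; losing one degrades quality instead of breaking
# playback.
import numpy as np

def make_descriptions(frames):
    """Description 0 = even frames, description 1 = odd frames."""
    return frames[0::2], frames[1::2]

def conceal_from_even(even):
    """Reconstruct the sequence when only the even description arrives."""
    out = []
    for i, f in enumerate(even):
        out.append(f)
        if i + 1 < len(even):
            out.append((f + even[i + 1]) / 2.0)  # interpolate lost odd frame
    return np.array(out)

frames = np.linspace(0.0, 9.0, 10)   # toy 1-D "frames" with linear motion
even, odd = make_descriptions(frames)
recovered = conceal_from_even(even)
```

Interpolation is exact here only because the toy motion is linear; with real video the concealed frames are approximations, which is precisely the graceful-degradation behaviour MDC is designed for.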
Watermarking techniques for 3D are developed. The topic of free-viewpoint watermarking was studied by NoE researchers for the
first time worldwide.
An optimal cross-layer scheduling scheme for video streaming is developed.
An optimal streaming strategy under rate and quality constraints is developed.
Different approaches for error concealment in stereoscopic images are developed and compared.
Color and depth representation based end-to-end 3DTV is further developed and tested.
Effects of noisy channels on 3D video transmission are investigated.
Application of turbo codes to 3D is investigated.
Packet loss resilience over wired and wireless channels and concealment techniques are being developed.
The collaborative work on the demo platform continues, enhancing the performance of the system and extending it to work over wide-area and wireless
networks. Work continues on supporting different 3D video formats and displays.
Signal Processing Issues in Diffraction and Holography:
Diffraction and holography are revisited from a signal processing point of view.
Analytical solutions for complex coherent light field generation by a deflectable mirror array device are developed.
Methods to compress holographic data are developed.
Fast methods to compute diffraction between tilted planes are developed and tested.
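These fast methods build on the angular spectrum of plane waves; a minimal parallel-plane propagator is sketched below (the tilted-plane case additionally rotates the spectrum in the frequency domain). The grid size, wavelength and distance are arbitrary illustrative values, not parameters from the project.

```python
# Angular-spectrum propagation between parallel planes: FFT the field,
# multiply by the free-space transfer function exp(i*kz*z), inverse FFT.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z between parallel planes."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
    H = np.exp(1j * kz * z)                          # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

aperture = np.zeros((64, 64), dtype=complex)
aperture[24:40, 24:40] = 1.0                 # square aperture, unit amplitude
out = angular_spectrum_propagate(aperture, wavelength=0.5e-6, dx=1e-5, z=1e-3)
```

Because the transfer function has unit modulus for propagating components, the field's energy is conserved, which is a convenient sanity check for any implementation.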
Algorithms to compute 3D optical fields from data distributed over 3D space are developed and tested.
Phase-retrieval methods for 3D measurements are investigated.
Effects of sampling the diffraction field on the reconstructions are analyzed and resolved.
Basic mathematical tools to investigate diffraction related signal processing problems are developed.
Autostereoscopic displays for 3DTV are further developed.
Viewer tracking autostereoscopic displays are further developed.
Characterization and calibration techniques for various spatial light modulator based holographic displays are developed.
Polymer dispersed liquid crystals are investigated for display applications.
Switchable materials are investigated for dynamic holographic displays.
Laser scanning techniques are being investigated.