A Predictor-Corrector Method for the Robot-assisted Assembly of Optical Systems

Prototypical setup of a Michelson interferometer
A misaligned optical component leads to a distorted wavefront
Macro-micro-manipulator with gripper for placing optical components

Motivation 

An automated and individualized assembly of optical systems is currently not possible. This is due to increasing demands on miniaturization and the resulting high rejection rates during assembly. To cope with the tight tolerances required to preserve system functionality (for example in interferometric devices), optical systems therefore integrate active and/or passive adjustment mechanisms for each critical optical component. These adjustment mechanisms, in turn, lead to increased production and labor costs.

Objectives

This research project deals with the function-oriented assembly of optical systems, which shall allow lower tolerances on optical components and positioning systems. Furthermore, lower rejection rates of optical components during assembly are to be achieved.

Approach

In order to realize the aforementioned goals, a predictor-corrector method is employed for the assembly process. Throughout the sequential assembly process, a simulation model runs in parallel and is constantly adapted to reality with the help of identification methods. This enables a prediction of the future positions of optical components (prediction step). By analyzing the demanded tolerances (such as wavefront deviations), a correction of the nominal position can then be calculated on the basis of the simulation model (correction step) in order to ensure the functionality of the system at the end of the manufacturing process.
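
The following Python sketch illustrates the interplay of prediction, identification, and correction on a toy pose model. The linear offset model, the gains, and all numerical values are assumptions made purely for illustration; they are not taken from the actual assembly system.

    import numpy as np

    # Minimal sketch of the predictor-corrector idea (illustrative only).
    # The "simulation model" is reduced to a linear pose model with an unknown
    # offset that is re-identified from measurements after every placement.

    def predict(nominal_pose, model_offset):
        """Prediction step: expected pose of the next component."""
        return nominal_pose + model_offset

    def identify(measured_pose, commanded_pose, model_offset, gain=0.5):
        """Adapt the model to reality from the latest measurement."""
        residual = measured_pose - (commanded_pose + model_offset)
        return model_offset + gain * residual

    def correct(nominal_pose, model_offset, tolerance):
        """Correction step: shift the commanded pose so the predicted
        deviation stays within the functional tolerance."""
        predicted_error = predict(nominal_pose, model_offset) - nominal_pose
        if np.linalg.norm(predicted_error) > tolerance:
            return nominal_pose - predicted_error  # counteract the predicted drift
        return nominal_pose

    # toy assembly sequence with an unknown systematic placement error
    rng = np.random.default_rng(0)
    offset_est = np.zeros(3)
    true_offset = np.array([0.02, -0.01, 0.005])
    for k in range(5):
        nominal = np.array([0.0, 0.0, 0.1 * k])
        command = correct(nominal, offset_est, tolerance=1e-3)
        measured = command + true_offset + 1e-4 * rng.standard_normal(3)
        offset_est = identify(measured, command, offset_est)
        print(f"step {k}: placement error {np.linalg.norm(measured - nominal):.4f}")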

Experimental Setup

The experimental setup consists of a positioning system and the optical system to be assembled. A macro-micro-manipulator with a gripper, characterized by a large workspace and high positioning precision, serves as the positioning system. A Michelson interferometer with a wavefront sensor at the end of its optical train serves as the prototypical optical system. The wavefront sensor can be utilized to infer the positions of optical components via identification methods and thus to adapt the simulation model so that it adheres to reality as closely as possible.
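
As an illustration of such an identification step, the sketch below recovers a small component misalignment from wavefront coefficients via a linear sensitivity matrix and least squares. The linear model, the matrix dimensions, and all numerical values are assumptions for this example, not the institute's actual identification method.

    import numpy as np

    # Assume small misalignments map approximately linearly onto wavefront
    # (e.g. Zernike) coefficients via a sensitivity matrix S obtained from the
    # simulation model; the pose offset is then recovered by least squares.

    rng = np.random.default_rng(1)
    S = rng.standard_normal((12, 3))            # 12 wavefront coefficients, 3 pose DOFs
    true_misalignment = np.array([50e-6, -20e-6, 10e-6])   # toy values
    wavefront = S @ true_misalignment + 1e-7 * rng.standard_normal(12)

    # identification step: estimate the misalignment from the measured wavefront
    estimated, *_ = np.linalg.lstsq(S, wavefront, rcond=None)
    print("estimated misalignment:", estimated)
    # the estimate is fed back into the simulation model before the next
    # prediction/correction step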

 

Contact: Dipl.-Tech. Math. Christopher Schindlbeck

Image-based control of an optomechanical derotator for the measurement of rotating components

Image derotator at the Institute of Measurement and Automatic Control
Result of a measurement on a roller bearing carried out with an infrared camera: thermographic measurements are an excellent example of measurements affected by motion blur due to the long exposure times prescribed by the temperature range
Result of a measurement on a blisk model carried out with a laser Doppler vibrometer: with the help of the derotator, the measurement beam can be kept focused on one point on the measurement object, enabling more precise vibration measurements

Rotating machine components are widespread in technical applications. To guarantee efficient and safe operation, it is necessary to inspect these components regularly. Their proper functioning is best demonstrated by metrological investigations, ideally carried out contactlessly during operation. The results are then precise because they are not falsified by the measurement system itself.

Conventional measurement methods quickly reach their limits for machine components in motion, in particular if the components are rotating. When measurement data is acquired with standard high-speed cameras capturing visible light or with thermographic cameras, the reason lies in motion blur: if objects rotate at high velocity or long exposure times are necessary (due to poor illumination conditions or the operating principle of the camera), the moving object appears streaked. Furthermore, focusing measurement methods that emit a measurement beam (such as laser Doppler vibrometry) onto one point of the object is impracticable due to the rotational movement, making the measurement results imprecise. A solution to this challenge is provided by an optomechanical derotator, which optically generates a stationary image of a rotating object by means of a rotating reflector assembly.

For stationary images it is essential that the angular position and velocity of the derotator are synchronized with half the angular position and velocity of the measurement object. At the Institute of Measurement and Automatic Control (IMR) this is achieved by an image-based, highly dynamic controller using full state feedback. Features on the measurement object (either specific structures on the object or additional markers) are first extracted and then tracked to calculate the angular position and velocity. The calculated values are then used as the feedback source for the derotator controller.
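
The following sketch simulates this synchronization principle on a unit-inertia derotator model with an assumed full-state feedback gain; the gains, sampling time, and plant model are illustrative choices, not the parameters of the actual IMR controller.

    import numpy as np

    # The derotator has to follow half the angular position/velocity of the
    # rotating object. A simple full-state feedback law on the tracking error
    # (angle and angular velocity) serves here as a stand-in for the real controller.

    dt = 1e-3                      # control period [s] (assumed)
    K = np.array([400.0, 40.0])    # state feedback gains (assumed)

    object_omega = 2 * np.pi * 10  # object spins at 10 Hz
    theta_obj = 0.0
    theta_der, omega_der = 0.0, 0.0

    for _ in range(2000):
        theta_obj += object_omega * dt              # "measured" via feature tracking
        ref_pos, ref_vel = 0.5 * theta_obj, 0.5 * object_omega
        error = np.array([ref_pos - theta_der, ref_vel - omega_der])
        torque = K @ error                          # full state feedback
        omega_der += torque * dt                    # unit-inertia derotator model
        theta_der += omega_der * dt

    print(f"residual angle error: {0.5 * theta_obj - theta_der:.2e} rad")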

In addition, it is necessary to align the optical axis of the derotator with the rotational axis of the measurement object. For this purpose, the derotator is mounted on a hexapod, which can precisely change its position and orientation in six degrees of freedom. At the IMR, optical methods are used to eliminate translational and rotational deviations of the derotator in a two-stage process.

If these conditions are met, the derotator can be used for a variety of measurement tasks. At the IMR, such metrological investigations are set up, carried out, analyzed and evaluated. In particular, measurements in the presence of

  • motion blur due to long exposure times and high rotational velocities
  • emitting measurement beams (e.g. laser vibrometry)
  • high frame rates

are improved or even made possible with the help of the derotator.

A Simultaneous Localization and 3D Mapping (SLAM) System using Airborne Cameras

Recently, there has been an increasing need for online 3D environment reconstruction methods in a wide range of applications, such as robot navigation and mapping as well as augmented and virtual reality. To address this, the concept of Simultaneous Localization and Mapping (SLAM) has been proposed, which refers to the problem of keeping track of an agent relative to its environment while simultaneously building a 3D model of that environment.

Owing to their huge advantages in cost and accessibility, visual sensors such as color cameras are widely employed in current SLAM systems, i.e. visual SLAM (vSLAM). However, most current vSLAM systems use traditional image descriptors for pose estimation, such as ORB-SLAM with its Oriented FAST and Rotated BRIEF (ORB) descriptors, and often do not obtain good results on scenes without rich texture. Moreover, a dense or semi-dense, high-precision 3D environment reconstruction is usually difficult to achieve with existing SLAM solutions.

In contrast to traditional indirect (feature-based) vSLAM solutions, this research project aims to develop a direct vSLAM system that allows building at least semi-dense, consistent 3D models of the environment.

In this project, we use the image intensity data directly to perform direct image alignment and to obtain the relative transformation, without the conventional, time-consuming feature extraction and description steps. Based on the estimated poses, the scene depths can be recovered and updated. A sliding-window optimization strategy is then employed, in which a local state information matrix is calculated and the estimation errors are further minimized. Finally, if a dense, high-precision 3D reconstruction is required, a standard global optimization method such as Bundle Adjustment (BA) can be applied.
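
As an illustration of the direct alignment idea, the sketch below estimates a pure 2D image translation by minimizing the photometric error with Gauss-Newton; a real direct vSLAM front end estimates a full SE(3) pose and uses per-pixel depth, both of which are omitted here. The synthetic scene and all parameters are assumptions for this example.

    import numpy as np

    def bilinear_shift(img, dx, dy):
        """Sample img at the pixel grid shifted by (dx, dy) with bilinear interpolation."""
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        xs, ys = np.clip(xs + dx, 0, w - 1.001), np.clip(ys + dy, 0, h - 1.001)
        x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
        ax, ay = xs - x0, ys - y0
        return ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x0 + 1]
                + (1 - ax) * ay * img[y0 + 1, x0] + ax * ay * img[y0 + 1, x0 + 1])

    # smooth synthetic scene; the "current" frame is the scene shifted by (1.5, -0.8) px
    ys, xs = np.mgrid[0:64, 0:64].astype(float)
    scene = lambda x, y: np.sin(0.2 * x) + np.cos(0.15 * y) + 0.5 * np.sin(0.1 * (x + y))
    ref, cur = scene(xs, ys), scene(xs + 1.5, ys - 0.8)

    gy, gx = np.gradient(ref)                    # image gradients form the Jacobian
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)
    p = np.zeros(2)                              # translation estimate (dx, dy)
    for _ in range(10):                          # Gauss-Newton on the photometric error
        r = (bilinear_shift(ref, *p) - cur).ravel()
        p -= np.linalg.solve(J.T @ J, J.T @ r)
    print("estimated shift:", p)                 # should approach (1.5, -0.8)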

The experimental setup consists of an unmanned aerial vehicle (UAV), an on-board mini-PC, a computer on the ground, and a time-of-flight (ToF) camera. In order to avoid the scale-drift problem of monocular SLAM systems, we employ a ToF camera as the visual sensor, which provides coarse depth measurements as initial values. To keep the system running in real time, the algorithm is divided into two parts: pose estimation and mapping run on the mini-PC, while the optimization runs on the ground PC. Data transmission between the on-board PC and the ground PC is achieved via a data link.
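
The following toy sketch illustrates one way coarse ToF depth can anchor the metric scale of an up-to-scale monocular reconstruction; the median-ratio scale recovery and all numbers are illustrative assumptions, not the project's actual initialization procedure.

    import numpy as np

    # The per-pixel ratio between metric ToF depth and the (up-to-scale)
    # monocular depth yields a robust estimate of the missing scale factor.

    rng = np.random.default_rng(3)
    true_depth = rng.uniform(0.5, 5.0, size=1000)                             # metres
    mono_depth = 0.37 * true_depth * (1 + 0.01 * rng.standard_normal(1000))   # unknown scale
    tof_depth = true_depth + 0.05 * rng.standard_normal(1000)                 # coarse but metric

    scale = np.median(tof_depth / mono_depth)          # robust scale recovery
    print("recovered scale factor:", scale)            # ~ 1 / 0.37
    metric_depth = scale * mono_depth                  # used as initial values for mapping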

 

Contact: M.Eng. Hang Luo

DJI MAVIC PRO drone
Kinect XBOX ONE

3D geometry detection of rotating components by means of rotationally symmetrical fringe projection

System diagram

In industry, rotating machinery parts are a common and important class of dynamic objects, and it is of interest to know their 3D geometry under operational conditions. The fringe projection (FP) technique, with the advantages of full-field acquisition, high accuracy and high point density, plays an important role in dynamic 3D shape measurement. The aim of this project is to develop a rotationally symmetric FP system that can accurately measure the 3D shape of fast rotating objects. For objects with high rotational speed, however, the rotational movement during the measurement leads to severe image blur, which makes the measurement unreliable or invalid. In order to enable the measurement of objects with high rotational speed, the optomechanical image derotator is introduced to compensate for the rotational movement. Three major issues will be addressed in this project: system design considering optical image derotation, phase recovery from short fringe-pattern sequences together with error compensation, and system modeling and calibration based on a hybrid model.
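
As background to the phase-recovery issue, the sketch below shows standard N-step phase-shifting phase recovery on synthetic fringes; handling short sequences, rotation compensation, and the hybrid calibration model mentioned above are beyond this toy example, and all values are assumed.

    import numpy as np

    # N-step phase-shifting: each pixel records I_n = A + B*cos(phase + delta_n);
    # with equally spaced shifts the wrapped phase follows from a least-squares fit.

    N = 4                                          # number of phase-shifted patterns
    shifts = 2 * np.pi * np.arange(N) / N

    # synthetic fringe images of a toy "surface" encoded in the phase
    x = np.linspace(0, 4 * np.pi, 512)
    true_phase = 0.8 * np.sin(x)                   # wrapped phase carries the 3D shape
    intensities = [0.5 + 0.4 * np.cos(true_phase + d) for d in shifts]

    # recover the wrapped phase from the N intensity samples per pixel
    num = sum(I * np.sin(d) for I, d in zip(intensities, shifts))
    den = sum(I * np.cos(d) for I, d in zip(intensities, shifts))
    recovered = -np.arctan2(num, den)              # wrapped phase in (-pi, pi]
    print("max phase error:", np.max(np.abs(recovered - true_phase)))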

This project is supported by the Humboldt Research Fellowship for Postdoctoral Researchers.

Contact: Dr.-Ing. Yongkai Yin