The Joint Programme of Work
Research Objectives
- The goal of the VIRGO network is to coordinate European research and
postgraduate training activities that address the development of intelligent
robotic systems able to navigate in (partially) unknown and possibly dynamic
environments. This goal will be achieved through a framework that enhances
RTD activities in European laboratories already established in this
scientific area. Specifically, in pursuing this goal, alternative environment
representations based on visual information will be studied and methods
to process these representations and use them to control the motion of
the mechanical parts of a robot will be developed. Although VIRGO will
focus on the use of visual information, other sensors will also be considered
in order to simplify and facilitate certain tasks. Environment learning
issues will also be considered within VIRGO, aiming at the autonomous
acquisition of discriminating environment features.
Methodological Approach and Work Plan
- VIRGO will focus on the use of visual information in robot navigation,
that is, information acquired from a sequence of images. This information
will, however, be combined with information provided by other sensors (e.g.
sonar, IR, tactile) to simplify certain tasks. In acquiring and processing
the visual information, our approach is based on the Purposive and Qualitative
vision paradigm, which offers many advantages for the particular task.
To make the coordination of the VIRGO Network tractable, the work is broken
down into tasks (T1-T8), each with specific (sub)goals. The
systematic and in-depth execution of these tasks will provide solutions
and insights to the problems involved. Emphasis will be given to the integration
of methodologies, since the goal of autonomous navigation requires more
than solutions to specific subproblems.
Tasks
- T1: Analysis of current methods for robot navigation.
- T2: Visual competences.
- T3: Landmark recognition.
- T4: Visual memory organization.
- T5: Sensor Fusion.
- T6: Strategic/Tactical Planner.
- T7: Learning methodologies.
- T8: Transfer and Exploitation.
Participation in the Tasks of VIRGO
| Task | FORTH | AUC | DIST | TUG | KTH | BONN | INRIA | GMD | DIKU | UNIZH |
| T1 | x | x | x | x | x | x | x | x | x | x |
| T2 | x | x | - | x | x | - | x | - | x | - |
| T3 | x | x | - | - | x | x | x | x | - | - |
| T4 | x | x | x | - | - | - | x | x | - | x |
| T5 | - | - | - | x | x | x | - | - | x | x |
| T6 | x | x | x | - | - | - | - | x | - | - |
| T7 | x | - | x | x | - | x | - | x | - | x |
| T8 | x | x | x | x | x | x | x | x | x | x |
(x = participating)
Brief technical description of tasks
- T1: Analysis of current methods for robot navigation.
Participation: All. Leader: KTH.
A systematic analysis of current methods for robot navigation will be performed
to identify problem areas and inadequacies. Although this project focuses
on vision-based navigation, this analysis will also consider non-visual
methods in order to provide a clear understanding of current techniques
and approaches.
- T2: Visual competences.
Participation: FORTH, AUC, DIKU, INRIA, KTH, TUG. Leader: FORTH.
A number of visual cues have a definite impact on robot motion. Therefore,
the system should possess a number of competences, that is, abilities to
capture specific aspects of the overall visual information. In this task, the following
competences will be studied: egomotion estimation, independent motion detection,
depth estimation, time to contact with points/curves/areas and even with
sheer gray-levels, tracking and visual homing. Our general approach in
this task will be ``qualitative'', since the robot has only partial knowledge
of the parameters of its vision system. Moreover, we will consider how
and what invariant information can be recovered from the images and efficiently
used in mobile robotics applications. The above approach will be based
on the assumption of an active observer, possessing full control of the
image acquisition process (active vision paradigm).
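Among the competences listed above, time to contact admits a particularly simple qualitative formulation. The following minimal sketch (the function name and the discrete approximation are illustrative, not part of the VIRGO work plan) estimates time to contact from the relative expansion of an object's image between two frames; notably, no camera calibration is required.

```python
def time_to_contact(size_prev, size_curr, dt):
    """Estimate time to contact from the relative expansion of an
    object's image between two frames taken dt apart.

    Under perspective projection with constant approach speed, image
    size s is proportional to 1/Z (Z = depth), and one can show that
    tau = s / (ds/dt), discretised here as
    tau ~= size_curr * dt / (size_curr - size_prev).
    This is a purely qualitative cue: no calibration is needed.
    """
    expansion = size_curr - size_prev
    if expansion <= 0:
        return float("inf")  # object not approaching
    return size_curr * dt / expansion
```

For an object whose image grows from 0.1 to 1/9 of the frame in one time step (depth shrinking from 10 to 9 units at unit speed), the estimate is 10 time steps, matching the true time to contact at the first frame.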
- T3: Landmark recognition.
Participation: INRIA, FORTH, GMD, AUC, KTH, BONN. Leader: INRIA.
Robot navigation will be primarily based on landmarks, that is, characteristic
aspects of the environment that can be used to describe it. Each landmark
is described by a set of features; following the recent trends in computer
vision and in order to achieve robust recognition, qualitative rather than
quantitative descriptors will be studied. Candidate descriptors are ordinal
depth, qualitative surface descriptors, shape, texture, color, etc. Landmarks
will initially be identified manually; however, in coordination with T7,
automated landmark identification (learning) will also be studied. Moreover,
since landmark recognition cannot be studied independently of memory considerations,
T3 will run in close coordination with T4.
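Ordinal depth, one of the candidate qualitative descriptors, can be illustrated with a small sketch (the function name is illustrative): rough per-feature depth estimates are reduced to near-to-far ranks, which are unchanged by any monotonic distortion of the depth scale and therefore robust to an uncalibrated or drifting depth estimate.

```python
def ordinal_depth(depths):
    """Convert rough per-feature depth estimates into an ordinal
    descriptor: the rank of each feature when sorted near-to-far.

    Ranks are invariant under any monotonic distortion of the raw
    depths, which is what makes the descriptor qualitative rather
    than quantitative.
    """
    order = sorted(range(len(depths)), key=lambda i: depths[i])
    ranks = [0] * len(depths)
    for rank, i in enumerate(order):
        ranks[i] = rank
    return ranks
```

For example, depths [3.0, 1.0, 2.0] and their squares [9.0, 1.0, 4.0] yield the same ranks [2, 0, 1], so a landmark described this way is recognised even when the depth scale is distorted.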
- T4: Visual memory organization.
Participation: AUC, FORTH, DIST, UNIZH, GMD, INRIA. Leader: AUC.
The system's knowledge of the environment is specified by the contents
of a visual memory (VM) which contains a large number of patterns corresponding
to landmark descriptions known to the system. Patterns can also encode
other information, e.g. the direction of the robot motion when this pattern
(landmark) is encountered, hints about other landmarks, the position
of the corresponding segment of the environment with respect to a specific
reference system, etc. VM contains descriptions of the local environment
in reference to coordinate systems defined with respect to the landmarks.
Therefore, it replaces the traditionally used 3-D geometric representation
of space with a set of selected space views with known topological relationships.
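The structure just described can be sketched as a small data structure (class and field names are illustrative, not the VIRGO design): landmark patterns with attached hints, linked by topological adjacency rather than embedded in a global 3-D map.

```python
class VisualMemory:
    """Minimal sketch of a topological visual memory: each landmark
    stores a feature pattern plus optional hints (e.g. the outgoing
    motion direction), and undirected links record which landmarks
    are reachable from which, giving a set of linked local views
    instead of a global 3-D reconstruction."""

    def __init__(self):
        self.patterns = {}   # landmark id -> pattern and hints
        self.links = {}      # landmark id -> set of adjacent ids

    def add_landmark(self, lid, pattern, hints=None):
        self.patterns[lid] = {"pattern": pattern, "hints": hints or {}}
        self.links.setdefault(lid, set())

    def connect(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbours(self, lid):
        return sorted(self.links[lid])
```

A landmark's neighbours are exactly the views the planner may select as the next intermediate goal, so path queries reduce to graph traversal over this structure.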
- T5: Sensor Fusion.
Participation: TUG, DIKU, BONN, KTH, UNIZH. Leader: TUG.
Vision alone provides useful information for space perception and navigation.
However, certain tasks can be simplified and approached more effectively
by employing other sensors. The case of an object approaching the robot
sideways can easily be handled by a ring of sonars around the robot, and
contact with objects is trivially detected with tactile sensors.
The information provided by other sensors will be integrated with the visual
information, complementing it or even replacing it (in case of emergency,
a non-visual sensor might take over the robot control temporarily). Fusion
of peripheral and central vision will also be considered under this task.
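The take-over behaviour described above amounts to priority-based arbitration. The following sketch (function name, command encoding, and thresholds are all illustrative assumptions) lets vision drive the robot unless a non-visual sensor signals an emergency:

```python
def fuse_commands(vision_cmd, sonar_range, tactile_contact,
                  min_range=0.3):
    """Priority-based sensor arbitration sketch. The visually derived
    command is passed through unless a higher-priority non-visual
    sensor reports an emergency, in which case that sensor temporarily
    takes over control of the robot.
    """
    if tactile_contact:
        return ("stop", 0.0)       # contact detected: halt immediately
    if sonar_range is not None and sonar_range < min_range:
        return ("back_off", -0.1)  # sonar: obstacle too close
    return vision_cmd              # normal visually guided motion
```

Arbitration of this kind complements fusion proper (combining measurements into one estimate); here the non-visual sensors replace, rather than refine, the visual command in an emergency.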
- T6: Strategic/Tactical Planner.
Participation: GMD, AUC, DIST, FORTH. Leader: GMD.
Based on representations of the environment, motion planning is performed
at two levels. At the higher level, a strategic planner is responsible
for specifying a sequence of intermediate goals whose accomplishment leads
to the final goal. The intermediate goals correspond to landmarks
that have been chosen by the system through a learning process. The strategic
planner uses approximate descriptions of the environment and, therefore,
its plans are not always achievable. A tactical planner at a lower level
navigates the robot between two intermediate positions that are specified
by the strategic planner. The tactical planner considers the current status
of the environment (e.g. obstacles) and uses appropriate visual competences
(T2) to accurately perform its task.
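A strategic plan over a topological landmark map reduces to graph search. As a minimal sketch (assuming the adjacency-dict representation is available; the tactical level is not shown), breadth-first search yields the shortest sequence of intermediate landmark goals:

```python
from collections import deque

def strategic_plan(links, start, goal):
    """Strategic planner sketch: breadth-first search over the
    topological landmark graph returns the shortest sequence of
    intermediate landmarks from start to goal, or None if the goal
    is unreachable given current knowledge. A tactical planner would
    then drive the robot between consecutive landmarks using the
    visual competences of T2.
    """
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

Because the map is approximate, a returned plan may still fail during execution; the tactical level reports failures back, and planning is simply rerun on the updated graph.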
- T7: Learning methodologies.
Participation: DIST, GMD, FORTH, BONN, UNIZH, TUG. Leader: DIST.
So far, a teacher has been assumed to identify the discriminating characteristics
of the environment (landmarks). This task will aim at investigating issues
related to spatial learning (``iconic'' navigation), that is, the possibility
for a robot to be ``shown'' a path (e.g. using a joystick, or by tracking
a person) and then to repeat the same path (or paths with similar
characteristics) on the basis of information stored during the ``teaching''
phase and of the current images. In particular, we intend to investigate
how to (a) ``condense'' the iconic information (take images at certain
times), (b) model and define the sensor - motor interaction, (c) handle
``memorized'' images, (d) cope with changes in the environment after the
teaching phase, and (e) combine the iconic approach with explicitly placed
visual landmarks.
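Point (a), condensing the iconic information, can be sketched as simple keyframe selection (the function name, distance measure, and threshold are illustrative choices, not the VIRGO method): during the teaching phase a frame is stored only when it differs enough from the last stored keyframe.

```python
def condense(frames, distance, threshold):
    """Iconic-memory condensation sketch: keep the first frame, then
    store a new keyframe only when its distance to the last stored
    keyframe exceeds the threshold, so long stretches of similar
    views collapse to a single representative image.
    """
    if not frames:
        return []
    keyframes = [frames[0]]
    for f in frames[1:]:
        if distance(f, keyframes[-1]) > threshold:
            keyframes.append(f)
    return keyframes
```

With frames represented as scalars, `condense([0, 1, 2, 10, 11, 20], lambda a, b: abs(a - b), 5)` keeps [0, 10, 20]; in practice `distance` would compare image descriptors rather than raw values.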
- T8: Transfer and Exploitation.
Participation: All. Leader: FORTH.
Results will be evaluated from the point of view of possible industrial exploitation.
This task will also involve industrial sponsors of VIRGO, who will also
be encouraged to interact with VIRGO participants during all phases of
the project programme.
Schedule and Milestones
- The duration of VIRGO will be 36 months. The time-schedule of the project
is shown in the following chart.
- Milestones:
- End of Month 12. Joint report on state-of-the-art approaches
in vision-based robot navigation.
- End of Month 18 (Mid-Term). Joint progress report on T2 and
planned VIRGO enhancements to specific local demonstrators. Specifications
of integration requirements for such enhancements to be effective.
- End of Month 24. Joint report on the work conducted in visual
competences, landmark recognition and visual memory organization.
- End of Project. Joint report on the work conducted in the VIRGO
Project. VIRGO enhancements to local demonstrators.
The above reports will also form the basis for internal discussions
and relevant joint publications.
Research Effort of the Participants
- The likely research effort that each Participant will contribute to
the joint programme of work is summarized in the following table:
| Participant | Network Financing (MM) | Other Sources (MM) | Total (MM) |
| FORTH | 48 | 42 | 90 |
| AUC | 36 | 36 | 72 |
| DIST | 52 | 48 | 100 |
| TUG | 38 | 27 | 65 |
| KTH | 54 | 48 | 102 |
| BONN | 42 | 42 | 84 |
| INRIA | 52 | 42 | 94 |
| GMD | 30 | 36 | 66 |
| DIKU | 36 | 36 | 72 |
| UNIZH | 34 | 36 | 70 |
| Totals | 422 | 393 | 815 |
Involvement of Industry
- VIRGO has already secured the endorsement of industrial sponsors.
Five companies have shown strong interest in the goals and objectives of
the VIRGO Network; these are ALCATEL Espace (France), ITMI APTOR (France),
AITEK s.r.l. (Italy), Robosoft (France), and TELEROBOT (Italy). Moreover,
the Intelligent Systems Group from Systems Technology Research in Daimler
Benz AG (Germany) is highly interested in actively participating in VIRGO
with its own budget and resources. These companies, as well as others that
will be contacted in the early phases of VIRGO, will be invited to contribute
to the early discussions and the definitions of specific goals within the
project's tasks. Moreover, companies which develop robotic platforms and
vision equipment will be invited to contribute to network workshops.
The industrial involvement outlined above will help focus the network's
research efforts on issues arising in technological applications of
autonomous navigation.
Last change: Jul. 23, 1996