Great news for 2023: Franka Production 3 and Franka Research 3 robots can now be enhanced by Roboception’s ‘Eyes and Brains for Robots’. Roboception’s vision solutions are part of the Franka ecosystem. Highly efficient pick-and-place applications, machine tending, bin picking or (de)palletizing have just become that much easier for your most recent Franka robots. For the quick and easy integration of Roboception’s rc_visard 3D stereo sensor and innovative rc_reason software modules, Franka provides easy-to-use apps for the Desk software.

rc_visard owners who want to use a Franka robot of the latest generation can simply purchase the license for the Roboception Apps and download them via Franka World. We also recommend updating the rc_visard to the latest firmware.

If you do not have an rc_visard yet, just contact us via

Watch Franka’s video about the cooperation:

Of course, bricks aren’t exactly boxes – nonetheless, their rectangular shape is what makes them ideal for a successful application of Roboception’s BoxPick solution. In this newly released use case, learn how Kautenburger GmbH (Germany) implemented a vision component for Refrectarios Kelsen S.A. (Spain) that supports the automated de-palletizing of oven bricks from wagons – never mind the fact that these bricks come in more than 100 different shapes and sizes, and tend to change shape and position ever so slightly throughout their production process.

Read how a seemingly simple and cost-efficient modification, flawlessly implemented, cut the cycle time for a pick-and-place task from 18 to 9 seconds, allowing for more efficient handling.

Nowadays, cobots can work together with humans in logistics or production without protective fences. Human-robot collaboration (HRC) is subject to strict standards and rules to ensure the safety of humans at all times. As a result, common HRC systems have to operate at reduced speeds or even stop completely when a human approaches.

The goal of the KI4MRK research project, funded by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung), was to develop human motion prediction using a combination of three deep neural networks (NNs). For this purpose, the joint workspace is transformed into a block representation (voxel representation), which allows obstacles to be represented as volumes. A first autoencoder preprocesses human poses in such a way that they can be efficiently stored in the system. A second autoencoder, trained on public motion databases, allows the prediction of individual motions. In the final step, a recurrent neural network with long short-term memory (LSTM), trained with only a small amount of task-specific data, can still predict complex actions. The developed AI-based motion prediction of actions in voxel space, combined with dynamic task scheduling, allows a more efficient design of HRC systems.
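The voxel (block) representation mentioned above can be illustrated with a minimal Python sketch: 3D points such as tracked human joint positions are mapped into a fixed-resolution occupancy grid over the shared workspace. The workspace bounds, resolution and function names here are illustrative assumptions, not details from the KI4MRK project.

```python
import numpy as np

def voxelize(points, bounds_min, bounds_max, resolution):
    """Map 3D points into a boolean occupancy grid (voxel representation).

    points:     (N, 3) array of xyz coordinates in meters
    bounds_min: (3,) lower corner of the shared workspace
    bounds_max: (3,) upper corner of the shared workspace
    resolution: number of voxels along each axis
    """
    points = np.asarray(points, dtype=float)
    bounds_min = np.asarray(bounds_min, dtype=float)
    bounds_max = np.asarray(bounds_max, dtype=float)

    grid = np.zeros((resolution,) * 3, dtype=bool)
    # Normalize coordinates into [0, 1) within the workspace bounds
    scaled = (points - bounds_min) / (bounds_max - bounds_min)
    # Discard points outside the workspace
    inside = np.all((scaled >= 0.0) & (scaled < 1.0), axis=1)
    idx = (scaled[inside] * resolution).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Example: two joint positions inside a 2 m cube, 16 voxels per axis
joints = [[0.5, 0.5, 1.0], [1.5, 0.1, 0.2]]
occ = voxelize(joints, [0, 0, 0], [2, 2, 2], 16)
print(int(occ.sum()))  # → 2 occupied voxels
```

Such a grid is what lets human poses and obstacles be treated as volumes; the autoencoders described above then compress these volumes into compact codes that the LSTM can predict over time.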

Furthermore, such systems can be used more effectively and economically by minimizing stop times. After successfully evaluating this motion prediction technique on two demonstrators last year, we look forward to exploring further applications of this method.

The results we were able to achieve together with our partners are summarized in this video:

Last month, the rc_visard was part of a Planetary Exploration Mission – or rather, the simulation of one: As part of the ARCHES project, the German Aerospace Center (DLR) and its partners including the Karlsruhe Institute of Technology (KIT) and European Space Agency (ESA) took a small fleet of robots to the Italian Mount Etna volcano in order to simulate robotic exploration and experiments as the project’s ‘Demo Mission Space’.

Mounted on DLR’s Lightweight Rover Unit (LRU), the rc_visard was used, for example, to reliably localize scientific instruments and tools, and for the rover’s internal environment modelling.

Roboception is an industry partner of the ARCHES project. This Helmholtz Future Project develops heterogeneous, autonomous and interconnected robotic systems in a consortium of the Helmholtz Centers DLR, AWI, GEOMAR and KIT, with future fields of application spanning from the environmental monitoring of the oceans through technical crisis intervention to the exploration of our solar system. For more information on this mission, the project’s Deep Sea Demo Mission and the overall project, visit the ARCHES Project Website…

All images are © DLR.

We are looking forward to hosting a course on the ‘Foundations of Robotics’ together with the Universitat Politècnica de Catalunya on July 13 and 14 at the UPC’s premises in Barcelona.

Held in the context of the 5GSMARTFACT project, the course is designed for its Early Stage Researchers with little or no background in robotics, perception and ROS. However, some seats remain and we are hence opening the registration – if you’re interested (or know someone who might be), read more and sign up here…

5GSmartFact is an MSCA-ITN project whose objective is to study, develop, optimize and assess the deployment of 5G networks for the benefit of industrial automation. 5GSmartFact has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement ID 956670.

Quite literally, an eventful June lies ahead for us: The Roboception team is looking forward to (re-)connecting with customers and partners at numerous exhibitions and conferences throughout the month.

Meet us at the following events:

June 01: Deutscher Innovationsgipfel, Munich

Don’t miss our pitch during the early-morning warm-up session, and stop by our mini-exhibit on the show floor for a live intro of our robot vision technology.

June 02: 32. Deutscher Montagekongress, Augsburg

At 1:45 pm, Roboception CEO Dr. Michael Suppa will be presenting – in German – on Applied AI in robotic machine tending (Angewandte KI in der robotergestützten Maschinenbeladung).

June 06-09: Automate Show, Detroit

Expect some interesting live demos using our technology, and do not miss the session on ‘Advances in Robotic Grasping & Picking’ (June 07, 1:30 – 3:15 pm, room 321), where Dr. Michael Suppa will discuss the latest trends in this domain with fellow panelists from across the industry.

June 21-24: Automatica, Munich 

We’ll be presenting our 3D stereo sensors and applied AI solutions at booth A4.304: an excellent opportunity to see what’s new in our portfolio, and to discuss how we can help make your automated production truly flexible.

We’ll also be on stage twice on June 21: 10:30–11:00 am at the Vision Expert Huddles, organized by VDMA in Hall B5 | Stand 111, and 2:10–2:30 pm at the automatica Forum in Hall A5 | Stand 131.

Please contact us for a ticket and/or meeting request.

June 28-30: European Robotics Forum (ERF), Rotterdam

We’re organizing a workshop titled ‘Applied AI in agile production, logistics and lab automation’ on June 28, 4:10–5:30 pm, in Room 4.

The rc_viscore offers 12 Megapixel (MP) resolution for maximum accuracy and level of detail, and is particularly suitable as a sensor component for more complex robotics applications that require a high level of precision coupled with larger workspaces.

Munich-based technology leader Roboception GmbH expands its range of high-performance sensors for industrial 3D image processing in robotics by adding the rc_viscore high-resolution 3D stereo sensor to its product portfolio.

The rc_viscore delivers an image resolution of 12 MP and hence generates a very detailed point cloud as well as depth, confidence and error images. The impressive image quality allows its use in complex automation applications that require high-quality image processing. It is suitable for the reliable detection of small parts with a size of just a few centimeters, even in large detection areas with a working distance of up to four meters – specifications relevant for automated machine loading, for example.

“We simply wanted to put ‘more pixels in the bin’ to further increase the applicability of image processing solutions in automation,” explains Dr. Michael Suppa, co-founder and CEO of Roboception. “We focused on achieving both a high-quality point cloud and a maximum accuracy and level of detail. And, of course, on maintaining the intuitive usability and unique price-performance ratio that our customers appreciate in our products.”

Coupled with Roboception’s rc_cube, the rc_viscore provides the image data for object detection and the computation of grasp points, for example in industrial automation and logistics. The new stereo sensor is compatible with all rc_reason software modules. The already integrated rc_randomdot pattern projector allows the use even with difficult or low-texture objects and provides exceptionally dense depth images.

The compact and robust design allows reliable use in harsh industrial environments. The innovative 3D stereo sensor is designed for an ambient operating temperature of 0°C to 45°C and operates with convective (passive) cooling. The rc_viscore can be used in stationary as well as mobile setups, for example on linear axes, enabling accurate 3D detection of static objects at different positions within a cell.

For the use of the rc_viscore as a high-resolution RGBD camera, Roboception offers the SGM®Producer, a GenICam-compatible transport layer. The SGM®Producer can be used with Halcon, with the rc_genicam_api for C++ programmers, with the rc_genicam_driver for ROS and ROS2, and with any other GenICam-compatible application.

The rc_viscore is pre-calibrated to the user’s individual workspace prior to shipment and is hence easy to set up. The low-maintenance, IP54-protected 3D stereo sensor was designed for intuitive use, and its implementation is supported by comprehensive online documentation.

The rc_viscore is available immediately and is already being used successfully in several pilot applications. Roboception will present the rc_viscore live at automatica 2022 (booth A4.304).

Today’s complex robot vision applications can require reliable high-speed, low-latency processing, the parallel use of multiple sensors, a seamless integration of user-owned software elements within the vision pipeline – or even all of these, and more, in a single use case. At the same time, space is limited and hardware must be fit for day-to-day use in industrial environments.

Good news: We’ve got you covered.

The rc_cube I, our industrial edge computer, is optimized for our rc_reason software modules and features a UserSpace for the deployment of your own software components – so you will only need this one computer in your cell.

It supports up to four sensors (not limited to our own rc_visards; you could also include e.g. a Basler blaze) that can be operated in parallel to cover different parts of the scene. Each has its individual settings and vision pipeline, while your configured regions of interest, load carriers, grippers and templates are shared across all pipelines.
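The split between per-pipeline state (sensor, settings) and cell-wide shared configuration (regions of interest, load carriers, grippers, templates) can be sketched as a small data structure. This is an illustrative Python model under the assumptions stated in the comments, not the rc_cube API; all class and field names are invented.

```python
from dataclasses import dataclass, field

MAX_PIPELINES = 4  # the rc_cube I supports up to four parallel sensor pipelines

@dataclass
class Pipeline:
    """One sensor with its own settings and vision pipeline (per-pipeline state)."""
    sensor: str                     # e.g. an rc_visard or a third-party GenICam sensor
    settings: dict = field(default_factory=dict)

@dataclass
class Cell:
    """Cell-wide configuration shared across all pipelines."""
    regions_of_interest: list = field(default_factory=list)
    load_carriers: list = field(default_factory=list)
    grippers: list = field(default_factory=list)
    templates: list = field(default_factory=list)
    pipelines: list = field(default_factory=list)

    def add_pipeline(self, pipeline: Pipeline) -> None:
        # Enforce the four-sensor limit described above
        if len(self.pipelines) >= MAX_PIPELINES:
            raise ValueError("at most four pipelines are supported")
        self.pipelines.append(pipeline)

cell = Cell(grippers=["vacuum_gripper"])
cell.add_pipeline(Pipeline("rc_visard", {"exposure": "auto"}))
cell.add_pipeline(Pipeline("basler_blaze"))
print(len(cell.pipelines))  # → 2
```

The point of the design is that each pipeline can be tuned to its own part of the scene, while anything describing the cell itself is defined once and reused everywhere.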

At Danfoss, a Danish manufacturer of mobile hydraulics as well as electronic and electrical components, robotic systems equipped with 3D sensors now reliably and precisely recognize and move a large number of different components.

In the new production line implemented by Danish integrator Quality Robot Systems (QRS), a total of six KUKA robots take over work steps that were previously performed manually. For this, the robotic cells must recognize and move up to 100 different components without manual intervention.

Read the full story of how – despite initial doubts – QRS was “able to […] provide our customer with the perfect solution.”

It is becoming an annual tradition: Roboception once again hosts a workshop during the European Robotics Forum.

About the Workshop

Perception is one of the key technologies for flexible production, including applications such as pick and place, machine tending, assembly, finishing, and quality testing. Applications in domains outside traditional manufacturing scenarios are in general more challenging, but the COVID pandemic has highlighted the need to use automation in environments where the health risks for humans are too high, e.g. test sampling and processing facilities.

The introduction of robotics into human-dominated environments demands fast adaptability of the system. Recent developments in 3D object detection and pose estimation algorithms, as well as in machine learning, have opened up new possibilities in these challenging domains.

The combination of machine learning and classical methods therefore aims to provide reliability, robustness and flexibility at the same time. Connecting these methods with innovative perception approaches shows great potential for coping with the requirements of lab automation, agile production and smart logistics.

In this workshop, use cases from industry are presented and then discussed in an interactive session with the attendees. The main goal is to create synergies and potential collaborations between researchers and industry, to facilitate the introduction of recent perception technologies into new scenarios.