No-Code Robot Auto Programming

NVIDIA Isaac Sim's visual scripting framework dramatically lowers the barrier to robot programming.
An intuitive node-based environment replaces complex text coding, so both engineers and non-experts can design and deploy robot tasks quickly and accurately.

PRINCIPLES

OmniGraph Node-Based Design

No-Code auto programming is realized through OmniGraph, NVIDIA Omniverse's powerful visual programming framework.
This system abstracts all functions—robot motion, sensor input, AI decisions, external device control—into 'Nodes', allowing users to connect these nodes and directly design the logic flow.
  1. Intuitiveness

    Configure robot motion sequences and conditional logic by connecting blocks, without knowing complex Python or ROS code.
  2. Modularity

    Functions such as reading sensor data, moving robot arms, and opening grippers are contained in individual nodes for easy reuse, simplifying debugging and maintenance of robot systems.
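The node abstraction above can be illustrated with a minimal, self-contained sketch. This is plain Python, not the actual OmniGraph API, and the node names (sensor, decision, gripper) are hypothetical:

```python
# Minimal illustration of node-based logic: each capability is a node
# with named inputs/outputs, and the graph wires outputs to inputs.
# Plain-Python sketch only; the real OmniGraph API differs.

class Node:
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute      # function: dict of inputs -> dict of outputs
        self.inputs = {}            # input name -> (source node, output name)
        self.outputs = {}           # cached outputs after evaluation

    def connect(self, input_name, source, output_name):
        """Wire another node's output into this node's input."""
        self.inputs[input_name] = (source, output_name)

    def evaluate(self):
        """Pull values from upstream nodes, then run this node's logic."""
        values = {}
        for input_name, (source, output_name) in self.inputs.items():
            source.evaluate()
            values[input_name] = source.outputs[output_name]
        self.outputs = self.compute(values)
        return self.outputs

# Hypothetical nodes: read a sensor, make a decision, drive a gripper.
sensor = Node("read_sensor", lambda _: {"distance": 0.04})
decide = Node("close_enough", lambda v: {"grip": v["distance"] < 0.05})
gripper = Node("gripper", lambda v: {"state": "closed" if v["grip"] else "open"})

decide.connect("distance", sensor, "distance")
gripper.connect("grip", decide, "grip")

print(gripper.evaluate()["state"])  # closed
```

Because each node exposes only named inputs and outputs, swapping the sensor node for a different one leaves the rest of the graph untouched, which is exactly the modularity benefit described above.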

FEATURES

Key Features of Simvis No-Code Automation Solution

  1. Breaking Entry Barriers Dramatically

    Without knowing the complex syntax or structure of text-based coding, anyone—planners, field workers, designers with no robot programming experience—can directly implement and modify robot logic.

    Required Knowledge : Zero Programming Knowledge
    Target Audience : Planners, Field Workers, Designers
    Effect : Elimination of Robot Expert Dependency
  2. Instant Visual Feedback

    Without code modification, results are immediately reflected in the simulation environment the moment nodes are connected. Live editing and real-time debugging dramatically accelerate prototype creation and idea validation speed.

    Development Method : Drag & Drop
    Feedback : Visual, Real-Time Reflection
    Advantage : Reduced Prototyping Time
  3. Motion-Based Auto Programming

    When you directly demonstrate motions to the robot (Ghost Teaching) in the simulation environment, the pose data is automatically converted into robot programs. Path planning and collision avoidance algorithms are integrated into nodes and auto-generated.

    Program Generation : Motion Demonstration-Based Automation
    Provided Functions : Path Planning, Collision Avoidance
    Effect : Rapid Deployment of Complex Tasks
  4. Flexible Environment Adaptability

    Reliably recognizes and localizes objects even under non-standard working conditions such as lighting changes, inter-object reflections, and irregularly stacked parts (bin picking).

    Target : Bulk Parts, Irregularly Stacked Items
    Sensor : High-Power LED or Laser Light Source Support
    Advantage : Essential for Flexible Manufacturing Systems
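The demonstration-to-program conversion in feature 3 can be sketched in a few lines. This is an illustrative stand-in: the real system generates OmniGraph nodes, and the linear interpolation below substitutes for a true path planner:

```python
# Sketch of motion-demonstration-based program generation:
# poses recorded during a "ghost teaching" session are turned into
# a dense, executable waypoint program. Linear interpolation stands
# in for real path planning; all names here are illustrative only.

def interpolate(p0, p1, steps):
    """Insert intermediate waypoints between two demonstrated poses."""
    return [
        tuple(a + (b - a) * t / steps for a, b in zip(p0, p1))
        for t in range(1, steps + 1)
    ]

def poses_to_program(demonstrated_poses, steps=4):
    """Convert a list of (x, y, z) poses into a dense waypoint program."""
    program = [demonstrated_poses[0]]
    for p0, p1 in zip(demonstrated_poses, demonstrated_poses[1:]):
        program.extend(interpolate(p0, p1, steps))
    return program

# Three poses a worker demonstrated by moving the robot by hand.
demo = [(0.0, 0.0, 0.2), (0.4, 0.0, 0.2), (0.4, 0.3, 0.1)]
program = poses_to_program(demo)
print(len(program))  # 1 start pose + 4 + 4 interpolated = 9 waypoints
```

In the actual solution, each generated waypoint would become a motion node with collision avoidance attached, rather than a bare coordinate tuple.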

PROS & CONS

Advantages & Considerations

Productivity
Advantage : Reduces robot deployment and task-change time from days to minutes by eliminating coding and compilation steps
Consideration : High platform dependency makes it difficult to use outside the Isaac Sim/Omniverse software environment
Accessibility
Advantage : Field workers without coding experience can control robots, enabling flexible workforce management and improved productivity
Consideration : Text-based coding remains better suited to very complex, fine-grained logic such as custom data types or functions
Debugging
Advantage : The logic flow is visualized through node connections, making error points intuitive and fast to identify
Consideration : Experienced programmers may find extensive mouse operations slower than typing code
Integration
Advantage : Peripheral devices such as vision systems, sensors, and grippers are wrapped as nodes for easy connection
Consideration : Visual scripting is typically interpreted at runtime, which can perform slightly worse than compiled code

APPLICATIONS

Key Application Areas

  1. Rapid Reconfiguration of High-Mix Low-Volume Production Lines

    When changing production items, simply modify visual logic instead of code to switch robot tasks within just a few hours.

  2. On-Site Training of Collaborative Robots (Cobots)

    Workers directly move robots via teach pendant or GUI to demonstrate paths, and the movements are automatically saved as program nodes.

  3. AI & Vision Logic Integration

    Connect AI model outputs such as object recognition and defect detection results to nodes without coding, making it easy to build logic for robots to decide and act autonomously.

  4. Virtual Simulation-Based Pre-Validation

    Pre-validate node logic for collisions, timing, and safety in a virtual environment before deploying to real robots.
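The pre-validation idea in item 4 can be sketched as a check of a planned path against known obstacle volumes. This is illustrative only; a real simulator validates full robot geometry, timing, and safety zones, not just point waypoints:

```python
# Sketch of virtual pre-validation: before deployment, every planned
# waypoint is checked against obstacle bounding boxes. The workspace
# coordinates and obstacle below are made up for illustration.

def inside_box(point, box_min, box_max):
    """True if a point lies inside an axis-aligned bounding box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def validate_path(waypoints, obstacles):
    """Return the indices of waypoints that collide with any obstacle."""
    return [
        i for i, wp in enumerate(waypoints)
        if any(inside_box(wp, lo, hi) for lo, hi in obstacles)
    ]

path = [(0.0, 0.0, 0.3), (0.2, 0.1, 0.15), (0.4, 0.2, 0.05)]
# One obstacle: a parts bin occupying z < 0.1 in a corner of the workspace.
obstacles = [((0.3, 0.1, 0.0), (0.5, 0.3, 0.1))]

collisions = validate_path(path, obstacles)
print(collisions)  # [2] -> the last waypoint dips into the bin
```

A failed check like this would be reported visually in the node graph, so the offending motion node can be fixed in simulation before any real robot moves.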