
THE MAGIC DINOBOT
- From Jameson Hunter, an original TV series idea, germinated in 2016. Jimmy dreams of building a giant robot ant as a special project; then one day his dreams come true when the robot he has built is transformed into a living, breathing companion.
DEVELOPING A CONVERSATIONAL AUTONOMOUS ROBOT - USING OVER-THE-COUNTER COMPONENTS - THE STATE OF THE ART
ABSTRACT
The rapid evolution of microcomputing, sensor technology, and artificial intelligence has paved the way for next‑generation companion robots. This white paper examines the feasibility of constructing a large robotic hexapod—approximately 2–3 meters long—that can engage with its owner and environment in a natural, conversational manner. By leveraging modern ARM-based processors, high‑performance smartphone chips, and platforms such as Arduino and Raspberry Pi, this paper outlines a design framework that integrates sensor fusion, dynamic locomotion, natural language processing, and context‑aware decision‑making to create an artificial friend with human‑like interaction capabilities.
1. INTRODUCTION
Robotic companions have long been a subject of scientific research and popular imagination—exemplified by fictional pioneers such as Jimmy Watson and his ambitious AI robot project: Antonius Maximus. Recent advances in over‑the‑counter computing elements have dramatically reduced the barriers to creating advanced autonomous systems. This paper discusses how combining cutting‑edge hardware with robust AI software can lead to the development of a hexapod robot that not only navigates its physical environment but also interacts socially and responds dynamically as a trusted friend or copilot.
KEY OBJECTIVES OF THIS PROJECT INCLUDE:
Autonomy in Navigation: Achieving robust mobility via a hexapod structure with dynamic adaptability.
Conversational Proficiency: Integrating state‑of‑the‑art natural language processing to afford natural and context‑sensitive dialogue.
Sensor Integration: Employing a diverse array of sensors (visual, auditory, proximity, and more) to enable perception and situational awareness.
Scalable Computing: Utilizing modern ARM and smartphone processors in conjunction with Arduino/Raspberry Pi boards to balance high‑level AI computation with real‑time sensor processing and motor control.
2. HARDWARE COMPONENTS & PLATFORM INTEGRATION
2.1 Modern Computing Platforms
ARM & Smartphone Processors: Recent ARM-based processors, widely deployed in smartphones, offer impressive computational power for edge processing. Their integrated Neural Processing Units (NPUs) support rapid inference for machine learning tasks on‑device. This capability is essential for real‑time natural language processing, object recognition, and decision‑making.
Raspberry Pi Boards: As an affordable, small‑form‑factor computer, the Raspberry Pi can serve as a central hub for integrating multiple sensors, managing low‑level I/O, and orchestrating communication between subsystems. Its compatibility with Linux and a plethora of open‑source libraries (such as OpenCV and TensorFlow Lite) makes it highly suitable for rapid prototyping.
Arduino Microcontrollers: Arduino boards excel at sensor interfacing and real‑time control. They are ideally suited for low‑latency tasks such as motor control, reading data from proximity sensors, and executing real‑time safety features.
2.2 Mechanical and Actuation Systems
Hexapod Chassis: A hexapod configuration (six‑legged design) offers superior stability and terrain adaptability. The mechanical design would incorporate multiple degrees of freedom per leg (using servo motors and/or stepper motors) to achieve dynamic gaits and obstacle negotiation.
Integrated Motor Controllers: Modern motor controllers, often based on PWM and integrated over ARM architectures, ensure smooth and responsive movements. These controllers can be linked to perception modules that adjust leg trajectories in real‑time using inverse kinematics algorithms.
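To make the inverse‑kinematics step concrete, the sketch below solves the joint angles for one three‑joint hexapod leg (coxa, femur, tibia) given a desired foot position in the leg's own frame. It is a minimal illustration under assumed link lengths and sign conventions; a real gait engine would add servo calibration offsets, joint limits, and body‑frame transforms.

```python
import math

# Illustrative link lengths in millimetres (assumed values, not from a real build)
COXA, FEMUR, TIBIA = 50.0, 120.0, 180.0

def leg_ik(x, y, z):
    """Return (coxa, femur, tibia) angles in radians for a foot target
    (x, y, z) in the leg frame, with z negative below the hip.
    femur is elevation above horizontal; tibia is the interior knee angle."""
    coxa_angle = math.atan2(y, x)                 # yaw toward the target
    r = math.hypot(x, y) - COXA                   # horizontal reach past the coxa link
    d = math.hypot(r, z)                          # straight-line distance hip -> foot
    if not abs(FEMUR - TIBIA) < d < FEMUR + TIBIA:
        raise ValueError("foot target out of reach")
    # Law of cosines gives the two remaining joint angles
    knee = math.acos((FEMUR**2 + TIBIA**2 - d**2) / (2 * FEMUR * TIBIA))
    hip = math.acos((FEMUR**2 + d**2 - TIBIA**2) / (2 * FEMUR * d)) - math.atan2(-z, r)
    return coxa_angle, hip, knee

# Example: a foot placed 150 mm out, 40 mm sideways, 80 mm below the hip
print([round(math.degrees(a), 1) for a in leg_ik(150.0, 40.0, -80.0)])
```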
2.3 Sensor Suite
Vision Systems: High‑resolution cameras (and potentially stereo vision cameras) can provide object detection, facial recognition, and environmental mapping. Combined with deep learning models optimized for ARM processors, these sensors allow the robot to interpret complex visual scenes.
Proximity and Distance Sensors: Ultrasonic sensors, LiDAR, and infrared sensors deliver precise distance measurements to aid in obstacle detection and navigation. These sensors feed data into sensor fusion algorithms that support simultaneous localization and mapping (SLAM).
Audio Input/Output: An array of microphones enables advanced noise cancellation and voice recognition, while high‑quality speakers support natural verbal interactions. These elements are critical for interfacing with AI conversation engines and contextual audio cues.
3. SOFTWARE, AI, AND INTERACTIVE FRAMEWORK
3.1 Natural Language Processing and Conversational AI
On‑Device NLP: Software libraries such as TensorFlow Lite and ONNX Runtime can run compact versions of NLP models on ARM and smartphone processors. These models, while not as large as their cloud‑based counterparts, can support basic conversation, intent recognition, and sentiment analysis in real time (a minimal on‑device sketch appears at the end of this subsection).
Cloud Integration for Advanced Processing: For more complex interactions—or to update the conversational model continuously—the robot can be designed to offload heavier NLP tasks to cloud-based services when network connectivity permits. This hybrid approach maximizes responsiveness while leveraging state‑of‑the‑art deep learning models.
Dialogue Management: Frameworks like Rasa or custom rule‑based systems can handle dialogue flow and context retention. Integration with personal memory databases can allow the robot to “remember” interactions and adapt its behavior according to prior conversations.
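To make the on‑device path concrete, below is a minimal intent‑recognition sketch built on the tflite_runtime interpreter API. The model file, label set, and float32 input layout are hypothetical placeholders standing in for whatever compact classifier the robot actually ships with.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Hypothetical label set for a small command/intent classifier
INTENTS = ["greeting", "follow_me", "stop", "question", "unknown"]

interpreter = Interpreter(model_path="intent_model.tflite")  # placeholder model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify_intent(token_ids):
    """token_ids: pre-tokenised utterance, already padded to the model's
    fixed input length. Returns the best label and its score."""
    x = np.array([token_ids], dtype=np.float32)     # check inp["dtype"] on a real model
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    best = int(np.argmax(scores))
    # A low score here is a natural trigger for the cloud fallback described above.
    return INTENTS[best], float(scores[best])
```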
3.2 Sensor Fusion and Environmental Awareness
Multi‑Modal Data Integration: The robot can utilize sensor fusion algorithms to merge data from cameras, LiDAR, ultrasonic sensors, and other sources, forming a comprehensive real‑time model of its surroundings.
Environment Mapping and Navigation: Algorithms for SLAM ensure that the robot builds and continuously updates a map of its environment. This allows safe navigation in dynamic settings and the planning of paths that adjust to both static and moving obstacles.
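Full SLAM is too large to sketch here, but the flavour of sensor fusion can be shown with a classic complementary filter, which blends a gyroscope's fast‑but‑drifting rate signal with an accelerometer's noisy‑but‑absolute tilt estimate. This is a deliberately minimal illustration, not the fusion stack a production robot would run:

```python
import math

class ComplementaryFilter:
    """Fuse a gyro rate (rad/s) with an accelerometer tilt estimate (rad).
    alpha near 1.0 trusts the gyro in the short term, while the small
    accelerometer weight slowly bleeds away the gyro's drift."""
    def __init__(self, alpha=0.98):
        self.alpha = alpha
        self.angle = 0.0

    def update(self, gyro_rate, accel_x, accel_z, dt):
        accel_angle = math.atan2(accel_x, accel_z)   # tilt implied by gravity
        integrated = self.angle + gyro_rate * dt     # dead-reckoned gyro angle
        self.angle = self.alpha * integrated + (1 - self.alpha) * accel_angle
        return self.angle
```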
3.3 Behavioral and Emotional AI
Personality Modules: In order to function as a “friend” or copilot, the robot’s behavior can be augmented with AI personality modules. These modules leverage historical interaction data to adapt tone, formality, and conversation context to suit the user’s preferences.
Emotion Recognition: Combining facial recognition (from its visual subsystem) with sentiment analysis updates its responses to align with the emotional state of its owner. This fosters a more empathetic interaction akin to human companionship.
3.4 Operating System and Integration Environment
Robot Operating System (ROS): ROS provides a flexible middleware framework supporting a wide range of sensors and devices. Running ROS on platforms such as the Raspberry Pi allows developers to orchestrate communication between high‑level AI modules and low‑level hardware controllers.
Real‑Time Control and Safety Protocols: The integration of Arduino microcontrollers and dedicated motor controllers ensures that real‑time responses (e.g., obstacle avoidance) are not delayed by higher‑level processing, thereby maintaining safety and responsiveness.
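The split between high‑level logic and real‑time safety can be illustrated with a minimal ROS 1 (rospy) node that forwards velocity commands to the motor layer but zeroes them whenever an ultrasonic sensor reports an obstacle. The topic names and the 0.3 m threshold are assumptions chosen for illustration:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Range

class SafetyGate:
    """Pass cmd_vel through, but publish an all-zero Twist (stop)
    while the ultrasonic range reads closer than the threshold."""
    def __init__(self):
        self.too_close = False
        self.pub = rospy.Publisher("motors/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("sensors/ultrasonic", Range, self.on_range)
        rospy.Subscriber("cmd_vel", Twist, self.on_cmd)

    def on_range(self, msg):
        self.too_close = msg.range < 0.3          # metres; assumed threshold

    def on_cmd(self, msg):
        self.pub.publish(Twist() if self.too_close else msg)

if __name__ == "__main__":
    rospy.init_node("safety_gate")
    SafetyGate()
    rospy.spin()
```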
4. ACHIEVABLE LEVELS OF ARTIFICIAL INTELLIGENCE WITH OFF-THE-SHELF COMPONENTS
When combining the above hardware and software platforms, the achievable level of artificial intelligence in a medium‑sized robotic hexapod may include:
Conversational Interactions: Real‑time speech recognition and synthesis with a friendly, adaptive conversational layer—sufficient for natural dialogue and emotional expression.
Situational Awareness: Robust environmental mapping and navigation through sensor fusion techniques, ensuring safe autonomous operation within complex indoor/outdoor environments.
Adaptable Behavior: Continuous learning from interactions, both in the physical domain (pattern recognition for obstacle avoidance) and in the social domain (adaptation of conversational style), albeit within the limitations set by available processing power.
Hybrid Edge‑Cloud Intelligence: The capacity to process most functions on‑device with the option to leverage cloud‑based resources for more computationally heavy tasks, balancing responsiveness with advanced features.
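To make that edge‑cloud split concrete, a dispatch routine might prefer the on‑device model and escalate to a remote service only when local confidence is low and the network cooperates. The endpoint and confidence threshold below are hypothetical placeholders:

```python
import requests

CLOUD_URL = "https://example.com/api/chat"        # stand-in, not a real service

def answer(utterance, local_model, timeout_s=2.0):
    """local_model returns (reply, confidence); fall back to the edge
    reply whenever the cloud is slow, down, or unreachable."""
    reply, confidence = local_model(utterance)
    if confidence >= 0.8:                         # assumed threshold
        return reply
    try:
        r = requests.post(CLOUD_URL, json={"text": utterance}, timeout=timeout_s)
        r.raise_for_status()
        return r.json()["reply"]
    except requests.RequestException:
        return reply                              # degrade gracefully to the edge answer
```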
While this setup does not create artificial general intelligence (AGI)—instead offering a specialized, robust, and friendly assistant—it does achieve a level of autonomy and interactivity that can mirror a “good friend” or copilot in everyday scenarios.
5. CHALLENGES & FUTURE DIRECTIONS
5.1 Power Management and Portability
Battery Life: Designing for extended operation remains challenging; integrating efficient power management systems is essential, especially given the power demands of multiple high‑resolution sensors and actuators.
Thermal Considerations: High‑performance processing in compact form factors requires careful thermal design to prevent overheating, particularly in outdoor or high‑load scenarios.
5.2 Software Integration and Real‑Time Performance
Latency Management: Maintaining responsive interaction—both for conversational AI and real‑time navigation—requires optimizing data pipelines and ensuring deterministic responses from the sensor–actuator network.
Robust Failure Modes: Ensuring safety through redundant systems and clear protocols for software or hardware failure is paramount for a robot intended to operate in close proximity to humans.
5.3 Scalability and Market Readiness
Prototyping vs. Commercial Production: While many components are available off‑the‑shelf, integrating them into a commercially viable product requires tackling issues such as durability, maintenance, and user customization.
Ongoing AI Improvements: Rapid advancements in AI research mean that periodic updates and iterative design revisions will be necessary to keep the system state‑of‑the‑art.
6. OVER-THE-COUNTER COMPONENTS CONCLUSION
The convergence of modern ARM and smartphone processors with widely available development platforms such as Arduino and Raspberry Pi has made the creation of an autonomous, conversational hexapod robot a tangible reality. Such a robot—capable of intelligent sensor fusion, natural language interactions, and dynamic mobility—can serve as an artificial friend or copilot, enhancing everyday life with advanced, context‑aware companionship.
Although challenges remain in power management, real‑time performance, and system integration, ongoing advances in AI and robotics continue to narrow the gap between fiction and reality. The integration of conversational AI with autonomous locomotion not only represents the current state‑of‑the‑art but also signals a future where companion robots are fully integrated into the fabric of daily human activity.
By exploring and harnessing these over‑the‑counter technologies, developers can continue to break new ground in companion robotics—realizing projects once confined to science fiction and making them a reality today.
WHAT ABOUT AMD RYZEN AND INTEL i9 CORE PROCESSORS?
You may be wondering where AMD Ryzen and Intel i9 core processors fit into the equation: are they too far behind the curve to catch up with ARM processors? How might smart Android devices benefit from AI (Copilot) enabled computers? Is it worth waiting for those, and how much might a half‑sensibly performing system, based on off‑the‑shelf phones and computer parts, cost in today's marketplace?
Below is a detailed discussion that examines where AMD Ryzen and Intel i9 core processors might fit into an intelligent robotics system—and how they compare with current ARM‑based solutions. We also explore how smart Android devices can benefit from AI‑enabled computing and provide some ball‑park cost estimates for systems built from off‑the‑shelf components.
1. COMPARING AMD RYZEN/INTEL i9 & ARM PROCESSORS
High‑Performance x86 Processors (AMD Ryzen and Intel i9)
Performance and Raw Power: AMD Ryzen and Intel i9 processors are designed for high‑performance desktop and workstation environments. They deliver impressive multi‑core performance and high clock speeds, which can be beneficial for running complex AI models, processing large datasets, or handling heavy simulation tasks. For example, recent comparisons of AMD’s Ryzen AI 9 series versus Intel’s i9‑based processors show that while both are exceptionally powerful, they are generally geared toward intensive computing workloads (see comparisons such as those on CPU‑monkey [1] and PCMag [2]).
Power Consumption and Efficiency: However, these processors are also more power‑hungry and produce more heat compared to modern ARM‑based systems. This makes them less ideal for embedded or mobile robotics unless you have a design that can accommodate a higher power budget—for instance, if the robot is tethered or has a large battery and advanced cooling systems.
ARM‑BASED PROCESSORS (INCLUDING SMARTPHONE SoCs)
Efficiency and Integration: Recent ARM processors (including those found in high‑end smartphones) offer an excellent balance of performance, low power consumption, and integrated features such as Neural Processing Units (NPUs) for AI workloads. Their energy efficiency and integrated sensor support make them highly attractive for robotics applications where battery life, heat dissipation, and real‑time responsiveness are critical.
Edge AI Capabilities: Many modern smart devices incorporate dedicated hardware accelerators (NPUs or similar) that permit on‑device deep learning inference, which is a key feature for interactive companion robots. Consequently, an ARM‑based system is often the first choice for a mobile, self‑contained robot that needs to process video, audio, and sensor data in real time.
A HYBRID APPROACH
For a large hexapod or similar robot, there is considerable merit in using a hybrid system:
Real‑Time Control and Sensor Fusion: Use ARM‑based boards (like Raspberry Pi, smartphone SoCs, or dedicated embedded processors) for real‑time control, sensor fusion, and low‑latency tasks. Their efficiency and integrated neural accelerators make them well‑suited for these applications.
Heavy‑Duty AI Processing: A high‑performance desktop‑class processor such as an AMD Ryzen or Intel i9 might be used in a docking station or “brain” module for more computationally demanding tasks such as training or updating complex AI models, processing large amounts of data offline, or running simulations. However, the majority of day‑to‑day operations in an autonomous robot would benefit from the low‑power ARM solutions.
2. INTEGRATION WITH SMART ANDROID DEVICES
Advantages of AI‑Enabled Android Devices
Built‑In Sensors and Connectivity: Modern Android devices already integrate high‑resolution cameras, microphones, and inertial sensors along with powerful SoCs. AI‑enabled smartphones can act as both control panels and computing units for robotic systems, potentially serving as a bridge between the robot and cloud services.
Portability and User Interface: Android devices have well‑developed operating systems and user interfaces. When integrated into a robotic system, they can provide natural human‑machine interaction, remote monitoring, or augmented reality overlays—all of which enrich the user experience.
On‑Device AI: With the continuous advancements in on‑device AI (enabled by dedicated NPUs and efficient frameworks like TensorFlow Lite or ONNX Runtime), smart Android devices can perform tasks such as speech recognition, object detection, and contextual inference, supporting conversational or adaptive behaviors in the robot.
Is It Worth Considering a Wait?
For many experimental prototypes or hobbyist projects, the current generation of ARM‑based platforms offers a robust starting point. However, if your application demands ultra‑high performance for certain learning or simulation tasks (and you can manage the higher power and cooling requirements), integrating an AMD Ryzen or Intel i9 module might be appropriate as a complementary unit. That said, for mobile robotics and interactive companion systems, energy efficiency and compact integration tend to favor ARM devices.
3. COST ESTIMATES FOR A “HALF-SENSIBLY PERFORMING” SYSTEM
The cost of a robotic “brain” or overall computing unit can vary considerably depending on the performance and power efficiency required.
LOW-PERFORMANCE (BUDGET) ESTIMATE - COMPONENTS
- Single‑board computer (e.g., Raspberry Pi 4): ~$50–$100
- Budget Android smartphone repurposed as a control unit: ~$150–$250
- Arduino for sensor control: ~$20–$40
- Basic sensors (camera, ultrasonic, etc.): ~$50–$100
Total Range: A basic control system based on off‑the‑shelf low‑cost components might be assembled for roughly $300–$500. This setup would provide enough computing power for simple sensor fusion, basic navigation, and low‑latency conversational interfaces.
HIGH-PERFORMANCE (PREMIUM) ESTIMATE - COMPONENTS
- Cutting‑edge ARM‑based SBC with enhanced AI capabilities (e.g., NVIDIA Jetson Nano/AGX Xavier or high‑end Raspberry Pi variants): ~$200–$800
- High‑end Android device or tablet dedicated to AI processing: ~$600–$1000
- Advanced microcontrollers for precise motor control (Arduino advanced boards or similar): ~$50–$100
- Comprehensive sensor suite (high‑resolution stereo cameras, LiDAR, precision inertial sensors): ~$300–$600
Total Range: A more capable system that can support complex AI models, advanced conversational interfaces, and robust sensor fusion might cost in the region of $1200–$2500 or more when assembled using top‑of‑the‑line off‑the‑shelf components.
These estimates focus on the computing and sensor subsystems alone—they would be in addition to mechanical parts, power management systems, and additional integration costs for a complete robotic hexapod.
4. CONCLUSION
Processor Selection: AMD Ryzen and Intel i9 processors remain leaders in high‑performance computing but are generally less suited for embedded robotics applications due to their higher power consumption and thermal output. In contrast, ARM‑based solutions excel for mobile, power‑efficient tasks and are widely used in modern robotics.
Smart Android Integration: Smart Android devices, with their integrated sensors, user interfaces, and on‑device AI capabilities, can provide significant benefits as both computing units and interaction platforms in robotics projects.
Cost Considerations: For a system that delivers “half‑sensible” performance using off‑the‑shelf parts, budget setups can be achieved for roughly $300–$500, while high‑performance systems might range from $1200 to $2500 or more, depending on the sophistication required.
Ultimately, the best approach is likely to adopt a hybrid architecture—using ARM‑based systems for everyday embedded processing along with occasional reliance on higher‑performance x86 processors where the computational burden justifies their energy footprint.
SPECIFICATION VERSUS COST - LOW AND MEDIUM COST OPTIONS
Below is a detailed discussion comparing two approaches to building an autonomous AI companion robot: a high‑performance system versus a more cost‑effective, practical system built from off‑the‑shelf parts. The discussion covers key hardware categories—processing units, sensor suites, mobility control, and integration strategies—to illustrate the differences in component selection and integration details for each approach.
1. CENTRAL PROCESSING & AI INFERENCE - HIGH PERFORMANCE SYSTEM
Processing Unit:
Primary Choice: An NVIDIA Jetson AGX Xavier or similar high‑end embedded AI platform.
Rationale: These boards are purpose‑built for AI applications, featuring multi‑core CPUs, integrated GPUs, and dedicated AI accelerators capable of real‑time deep learning inference for tasks such as advanced natural language processing, image recognition, and sensor fusion.
Integration Details:
Use CUDA‑enabled libraries for object detection and SLAM (Simultaneous Localization and Mapping).
Custom software stacks (for instance, leveraging TensorRT for hardware acceleration) are optimized to exploit the platform’s full potential.
Robust thermal management (heat sinks or active cooling) is necessary to maintain performance under sustained loads.
Secondary/Support Unit:
A performance‑oriented ARM‑based microcontroller (or a high‑end smartphone SoC) may serve as an edge device for low‑latency control loops.
This could be integrated via high‑speed communication (e.g., SPI or I²C) with the main board.
PRACTICAL (COST-EFFECTIVE) SYSTEM - PROCESSING UNIT
Primary Choice: Raspberry Pi 4 or Raspberry Pi 400.
Rationale: Widely available with a broad support community and ample processing power for basic AI tasks. Though less powerful as a standalone AI engine, the Raspberry Pi is well‑suited to coordinate specific tasks like basic sensor fusion, low‑resolution image processing, and speech synthesis.
Integration Details:
Use available Python libraries (TensorFlow Lite, OpenCV, or PyTorch Mobile) that run efficiently on a Pi.
Leverage additional USB accelerators such as the Intel Neural Compute Stick 2 if higher inference performance is needed without jumping to a high‑price point.
Secondary/Support Unit:
An Arduino board for real‑time control and interfacing with motor drivers and low‑latency actuators.
Communication is typically through serial or I²C, ensuring that sensor data and control signals are processed reliably without overloading the central board.
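A minimal sketch of that Pi‑to‑Arduino serial link is shown below using the pyserial library. The port name and the framed packet layout (start byte, command ID, 16‑bit argument, XOR checksum) are illustrative assumptions, not a fixed protocol:

```python
import struct
import serial  # pip install pyserial

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)  # port name varies by setup

def send_command(cmd_id, value):
    """Frame: 0xAA start byte, one command byte, signed 16-bit little-endian
    argument, then a one-byte XOR checksum over the preceding bytes."""
    payload = struct.pack("<BBh", 0xAA, cmd_id, value)
    checksum = 0
    for b in payload:
        checksum ^= b
    ser.write(payload + bytes([checksum]))

send_command(0x01, 500)   # e.g. a hypothetical "set gait speed" command
```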
2. SENSOR SUITE & ENVIRONMENTAL PERCEPTION - HIGH PERFORMANCE SYSTEM
Vision Sensors:
High‑resolution, stereo cameras or even a 3D depth camera (like Intel RealSense) to enable detailed mapping and facial/object recognition.
Integration often requires calibration routines and high‑bandwidth data transfer (via USB3 or MIPI interfaces) to ensure timely processing.
Distance and Localization Sensors:
LiDAR or high‑precision ultrasonic sensors can provide accurate distance measurements and obstacle detection.
A combination of IMU (Inertial Measurement Unit) sensors and GPS (if outdoor use is required) can aid in SLAM algorithms.
Integration involves sensor fusion frameworks (often using ROS libraries) to combine data accurately from multiple sources.
Audio Sensors:
Multi‑array microphones with noise cancellation, integrated with advanced DSP (digital signal processing) modules for speaker localization and improved speech recognition.
These systems might use higher sample rate ADCs and proprietary software to maintain fidelity in noisy environments.
PRACTICAL (COST-EFFECTIVE) SYSTEM
Vision Sensors:
A Raspberry Pi Camera Module V2 or similar low‑cost camera.
While resolution and frame rates may be lower, these cameras are compatible with existing libraries and are easy to integrate using standard connectors.
Calibration and processing are more basic but are sufficient for simpler object detection tasks.
Distance and Localization Sensors:
Ultrasonic sensors (e.g., HC‑SR04 modules) or low‑cost IR sensors provide acceptable performance for rudimentary obstacle avoidance (see the reading sketch after this sensor list).
A basic IMU (e.g., MPU‑6050) can supply orientation data.
ROS implementations on the Raspberry Pi can still run SLAM algorithms, albeit with lower resolution maps and slower update rates.
Audio Sensors:
A single microphone or a compact microphone array designed for hobbyist robotics.
Basic speech recognition can be offloaded to cloud services if required, or use offline libraries that run on lower‑power processors.
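As referenced above, here is a minimal HC‑SR04 reading sketch for the Raspberry Pi using the standard RPi.GPIO library; the BCM pin numbers are assumptions that must match the actual wiring:

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                  # assumed BCM pins; adjust to your wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm(timeout_s=0.04):
    """Fire a 10 us trigger pulse and time the echo; sound covers
    ~34300 cm/s, so distance is half the round trip: dt * 17150."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    deadline = time.time() + timeout_s
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to begin
        start = time.time()
        if start > deadline:
            return None              # no echo: treat as nothing in range
    while GPIO.input(ECHO) == 1:     # time how long the pulse stays high
        stop = time.time()
        if stop > deadline:
            return None
    return (stop - start) * 17150
```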
3. MOBILITY & ACTUATION - HIGH PERFORMANCE SYSTEM
Chassis and Locomotion:
A robust hexapod with servo motors that offer high torque and precision control.
Motors equipped with encoders provide feedback for refined gait control and dynamic obstacle negotiation.
Integration uses dedicated motor controllers with advanced communication protocols (CAN bus or EtherCAT) for real‑time performance.
Software for Motion Control:
Integrated with high‑performance boards, complex trajectory planning and inverse kinematics algorithms can drive smooth, adaptive locomotion.
Custom firmware can be designed to coordinate multi‑leg movements, using real‑time sensor input to adjust dynamically.
PRACTICAL (COST-EFFECTIVE) SYSTEM
Chassis and Locomotion:
A modular hexapod kit built on hobbyist-grade servo motors.
While less robust, these components can be managed using common Arduino motor shields.
Software libraries available in the Arduino ecosystem simplify the process of controlling gaits, even if the motion is less fluid and adaptive.
Software for Motion Control:
Open‑source inverse kinematics libraries and basic gait routines allow for acceptable performance within controlled environments.
Integration is simplified by using standard protocols and off‑the‑shelf hardware, reducing cost and complexity.
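One low‑cost route on the Pi side, as an alternative to an Arduino motor shield, is to drive hobby servos through a PCA9685 16‑channel board using the Adafruit ServoKit library. The stub below walks a deliberately simplified alternating tripod gait; the channel map and angles are placeholder assumptions, and a real gait would also retract the planted tripod while the lifted one swings:

```python
import time
from adafruit_servokit import ServoKit  # pip install adafruit-circuitpython-servokit

kit = ServoKit(channels=16)              # assumes a PCA9685 on the default I2C address
# Hypothetical channel map: one lift and one swing servo per leg;
# legs 0/2/4 form tripod A and legs 1/3/5 form tripod B.
LIFT = [0, 2, 4, 6, 8, 10]
SWING = [1, 3, 5, 7, 9, 11]
TRIPOD_A, TRIPOD_B = [0, 2, 4], [1, 3, 5]

def step(tripod, lift_deg=60, swing_deg=120, dwell=0.25):
    for leg in tripod:
        kit.servo[LIFT[leg]].angle = lift_deg    # raise this tripod's feet
    time.sleep(dwell)
    for leg in tripod:
        kit.servo[SWING[leg]].angle = swing_deg  # swing the lifted legs forward
    time.sleep(dwell)
    for leg in tripod:
        kit.servo[LIFT[leg]].angle = 90          # plant the feet again
    time.sleep(dwell)

for _ in range(10):                              # a few alternating strides
    step(TRIPOD_A)
    step(TRIPOD_B)
```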
4. INTEGRATION STRATEGY & COMMUNICATION - HIGH PERFORMANCE SYSTEM
Operating System and Middleware:
Likely to run a full‑featured Linux system (e.g., Ubuntu) with ROS (Robot Operating System) as middleware to coordinate communications between various modules.
This enables sophisticated perception, planning, and AI modules to interact with lower‑level motor controllers via well‑defined interfaces.
Networking and Cloud Integration:
High‑performance units typically incorporate high‑speed wireless (Wi‑Fi 6 or LTE‑A) and Ethernet, which allow for off‑loading heavy computations or model updates to a remote server when necessary.
Secure APIs and protocols are employed to ensure that any cloud communication does not disrupt real‑time performance.
PRACTICAL (COST-EFFECTIVE) SYSTEM
Operating System and Middleware:
A lightweight Linux distribution on Raspberry Pi or a dual‑boot setup (Pi OS and a dedicated real‑time OS for motor control) ensures that tasks are partitioned between real‑time execution and high‑level logic.
Simpler middleware solutions (such as micro‑ROS or even custom Python scripts) can suffice for less complex integration needs.
Networking:
Standard Wi‑Fi modules provide connectivity for remote control and cloud services, but at a lower data rate and potentially higher latency.
This may require a design that tolerates occasional delays, and reliance on edge processing for critical control tasks ensures continued autonomy even when network connectivity is less reliable.
5. COST & PRACTICAL CONSIDERATIONS - HIGH PERFORMANCE SYSTEM COST RANGE
- Processing & AI Accelerators: ~$400–$1000
- Advanced Sensors (stereo cameras, LiDAR, etc.): ~$300–$800
- High‑quality Actuators & Motor Controllers: ~$500–$1000
- Integration and Custom Chassis: ~$800–$1500
Total (for core subsystems): Likely in the range of $2000–$4000, not including custom fabrication costs and additional development resources.
PRACTICAL (COST-EFFECTIVE) SYSTEM COST RANGE
- Processing (Raspberry Pi + USB AI accelerator): ~$100–$200
- Basic Sensors (Pi Camera, ultrasonic, basic IMU): ~$50–$150
- Hobby‑grade Actuators & Motor Controllers: ~$100–$300
- Chassis & Integration: ~$150–$400
Total (for core subsystems): Likely in the range of $400–$1000, which is well within reach for hobbyists or initial prototypes.
COST COMPARISON CONCLUSION
High‑Performance Integration: For an AI friend or copilot that requires fluid, lifelike interaction and robust environmental awareness, the integration of high‑performance processors, advanced sensors, and precision actuators is essential. This approach, though cost‑intensive, allows for sophisticated deep learning inference and dynamic real‑time decision‑making.
Practical Integration: For cost‑conscious projects that still aim to deliver meaningful autonomous behavior and interaction, off‑the‑shelf components like the Raspberry Pi, Arduino, and budget sensors offer a solid foundation. While performance may be less robust in dynamic or complex environments, these components allow for rapid prototyping and accessible experimentation with AI robotics.
Each approach has its own merits and applications. High‑performance systems are suited for research, advanced prototypes, or commercial products where smooth interaction and reliability are paramount. Practical systems, by contrast, serve as an excellent entry point for developers, hobbyists, or educational projects where innovation is encouraged on a lower budget.
LINKS & REFERENCES
[1] https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_ai_9_365-vs-intel_core_i9_13900
[2] https://www.pcmag.com/articles/i-benchmarked-ai-processors-from-amd-and-intel-and-the-results-are-underwhelming
[3] https://robotics24.net/blog/calculate-cost-of-robotic-cell/
[4] https://motioncontrolsrobotics.com/resources/tech-talk-articles/range-robot-cost/