Bringing computer vision and AI to the Internet of Things (IoT) and edge device prototypes is easy with the enhanced capabilities of the Intel NCS 2. For developers working on a smart camera, a drone, an industrial robot or the next must-have smart home device, the Intel NCS 2 offers what’s needed to prototype faster and smarter.
What looks like a standard USB thumb drive hides much more inside. The Intel NCS 2 is powered by the latest generation of Intel VPU – the Intel Movidius Myriad X VPU. This is the first VPU to feature a neural compute engine – a dedicated hardware neural network inference accelerator delivering additional performance. Combined with the Intel Distribution of the OpenVINO toolkit, which supports more networks, the Intel NCS 2 offers developers greater prototyping flexibility. Additionally, thanks to the Intel AI: In Production ecosystem, developers can now port their Intel NCS 2 prototypes to other form factors and productize their designs.
How It Works:
With a laptop and the Intel NCS 2, developers can have their AI and computer vision applications up and running in minutes. The Intel NCS 2 runs on a standard USB 3.0 port and requires no additional hardware, enabling users to seamlessly convert and then deploy PC-trained models to a wide range of devices natively and without internet or cloud connectivity.
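That deploy step can be sketched with the Python Inference Engine API that ships with the Intel Distribution of the OpenVINO toolkit. The model paths below are placeholders, and the import is guarded so the sketch degrades gracefully when the toolkit is not installed:

```python
# Sketch: load an OpenVINO IR model onto the Intel NCS 2, which the
# toolkit exposes under the device name "MYRIAD".
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # Toolkit not installed; the calls below are illustrative.

def load_on_ncs2(model_xml: str = "model.xml", weights_bin: str = "model.bin"):
    """Compile a converted (IR-format) model for the Myriad X VPU over USB."""
    if IECore is None:
        return None
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=weights_bin)
    # Returns an executable network bound to the stick; no cloud connection
    # is involved at any point.
    return ie.load_network(network=net, device_name="MYRIAD", num_requests=1)
```

With the toolkit installed and a stick plugged in, the returned executable network exposes `infer()` for synchronous inference on preprocessed input blobs.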
The first-generation Intel NCS, launched in July 2017, has fueled a community of tens of thousands of developers, has been featured in more than 700 developer videos and has been utilized in dozens of research papers. Now with greater performance in the NCS 2, Intel is empowering the AI community to create even more ambitious applications.
- Reduce time to prototype or tune neural networks with versatile hardware processing capabilities at a low cost.
- Enhanced hardware processing capabilities vs. the original Intel Movidius Neural Compute Stick
- Take advantage of 16 cores (up from 12) plus a neural compute engine, a dedicated deep neural network accelerator
- Up to 8X performance gain on deep neural network inference, depending on network
- Affordably accelerate deep neural network applications
- Transform the AI development kit experience
- Plug and Play Simplicity
- Affordable price point
- Supports common frameworks and includes out-of-the-box tools for fast development
- Exceptional performance per watt takes machine vision to new places
- Run “at the edge” without reliance on a cloud computing connection
- Deep learning prototyping is now available on a laptop, a single board computer or any platform with a USB port
- Accessible and affordable — take advantage of more performance per watt and highly efficient fanless design
- Combine the hardware-optimized performance of the Intel® Movidius™ Myriad™ X VPU and the Intel® Distribution of OpenVINO™ toolkit to accelerate deep neural network-based applications
- First in its class to feature the Neural Compute Engine — a dedicated hardware accelerator
- 16 powerful processing cores, called SHAVE cores, and an ultrahigh-throughput intelligent memory fabric together make the Intel Movidius Myriad X VPU the industry leader for on-device deep neural networks and computer vision applications
- Featuring an entirely new deep neural network (DNN) inferencing engine on the chip
Simpler versatility for prototyping:
- Intel Distribution of OpenVINO toolkit streamlines the development experience
- Prototype on the Intel Neural Compute Stick 2 and then deploy your deep neural network onto an Intel Movidius Myriad X VPU-based embedded device
- Streamline the path to a working prototype
- Extend workloads across Intel hardware and maximize performance
- The robust Intel Distribution of OpenVINO toolkit enables simpler porting and deployment of applications and solutions that emulate human vision
- The Intel Distribution of OpenVINO toolkit streamlines development of multiplatform computer vision solutions — increasing deep learning performance
- It’s now easier to develop applications for heterogeneous execution across the suite of Intel acceleration technologies. Develop once and deploy across Intel CPUs, VPUs, integrated graphics, or FPGAs.
- If desired, users can implement their own custom layers and execute those on the CPU while the rest of the model runs on the VPU
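The heterogeneous execution and CPU fallback described above are selected with a device-priority string passed to OpenVINO's HETERO plugin: each layer runs on the first listed device that supports it. A minimal sketch, where the helper function is hypothetical but the device names ("MYRIAD" for the NCS 2, "CPU") are the toolkit's standard identifiers:

```python
# Illustrative helper (not part of OpenVINO) that builds the device string
# accepted by the HETERO plugin. Layers run on the first device in the
# priority list that supports them, falling back to later devices.
def hetero_device(*devices: str) -> str:
    if not devices:
        raise ValueError("at least one device is required")
    return "HETERO:" + ",".join(devices)

# Custom layers fall back to the CPU while the rest of the model runs on
# the NCS 2:
print(hetero_device("MYRIAD", "CPU"))  # HETERO:MYRIAD,CPU
```

The resulting string is what would be passed as the `device_name` when loading a network, in place of a single-device name such as "MYRIAD".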