Etch-A-Snap: The Raspberry Pi Powered Etch-A-Sketch Camera
Etch-A-Snap is the result of mashing together a Pocket Etch-A-Sketch, a Raspberry Pi Zero, an onboard camera module, a couple of stepper motors, and a number of other goodies to create what is ‘probably’ the world’s first Etch A Sketch camera.
Etch-A-Snap first reduces the image to 240 by 144 pixels, since the tiny drawing toy at the back can’t render any more detail than that. The Etch A Sketch doesn’t do color either, so the image is further reduced to black and white.
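A minimal sketch of that preprocessing step (this is an illustration using Pillow, not the project's actual code): shrink the photo to the toy's effective 240 by 144 resolution, then dither it down to 1-bit black and white.

```python
from PIL import Image

def preprocess(img):
    """Reduce a photo to something an Etch A Sketch can plausibly draw."""
    img = img.convert("L")        # drop color, keep grayscale
    img = img.resize((240, 144))  # match the toy's drawable resolution
    return img.convert("1")       # 1-bit black/white (dithered by default)

# Example with a synthetic gray image:
result = preprocess(Image.new("RGB", (1024, 768), (128, 128, 128)))
print(result.size, result.mode)
```

The `preprocess` name and the in-memory test image are placeholders; the real pipeline would load a capture from the Pi camera instead.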
The low-res image is then processed and converted to commands for a plotter, a type of printer that works similarly to an Etch A Sketch: it physically draws an image by moving a pen along X and Y coordinates. In this case, though, the commands are sent to stepper motors that spin the Etch A Sketch’s upgraded knobs to move its drawing tip accordingly.
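To give a feel for that last step, here is a hypothetical sketch (names and the calibration constant are assumptions, not values from the project) of turning an ordered list of pixel waypoints into relative step counts for the two knobs, X knob first, Y knob second:

```python
# Assumed calibration: how many motor steps move the stylus one pixel.
STEPS_PER_PIXEL = 10

def path_to_moves(waypoints):
    """Convert absolute (x, y) pixel waypoints into relative knob moves.

    An Etch A Sketch can only draw one continuous line, so every move is
    expressed relative to the previous stylus position.
    """
    moves = []
    x, y = waypoints[0]
    for nx, ny in waypoints[1:]:
        moves.append(((nx - x) * STEPS_PER_PIXEL, (ny - y) * STEPS_PER_PIXEL))
        x, y = nx, ny
    return moves

print(path_to_moves([(0, 0), (3, 0), (3, 2)]))  # right 3 px, then down 2 px
```

In the real device each tuple would be fed to the stepper drivers, with the continuous-line constraint shaping which paths through the image are even possible.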
Etch-A-Snap is completely portable, but it’s probably the last thing you’d want to bring along to capture vacation memories.
As maker Fitzpatrick explains, an image takes between 15 minutes and an hour to “develop,” depending on just how many elaborate lines are needed to recreate the photograph. Much of that time is spent on the computation that translates the image, so the camera is at its slowest right after a shot is snapped.
While the practical applications are limited, part of the appeal is that the system can also draw manually processed images, which can noticeably improve results when the source images are chosen carefully.