Alright folks, gather ’round! Let me tell you about this crazy thing I did called “sketch age streamer.” It all started with me wanting to mess around with AI and see if I could make something kinda fun.
First off, I spent ages trying to find the right tools. I knew I wanted something that could take a live video feed and then apply some kind of filter to make it look like a drawing or a painting. Sounded simple enough, right? Wrong! I ended up digging through tons of different libraries and frameworks. I played around with OpenCV for video processing, and then I looked at a bunch of different machine learning models for style transfer.
The initial setup was a pain. Getting the camera feed into the code without lag was the first hurdle. I remember spending a solid afternoon just tweaking camera settings and trying different capture methods. Eventually, I got something that was… acceptable. Not great, but at least it wasn’t a slideshow.
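In case you want the gist, here’s a stripped-down version of the kind of capture loop I ended up with. Fair warning: this is a sketch rather than my exact code, and the buffer-size trick is only honored by some camera backends, but it’s the tweak that did the most for the lag.

```python
import cv2

# Open the default webcam (index 0); bump the index if you have several.
cap = cv2.VideoCapture(0)

# Keep the driver's frame buffer tiny so reads return the freshest frame
# instead of a stale one (the main anti-lag tweak; only some capture
# backends honor this property).
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("raw feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```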
Next up, I had to figure out the style transfer part. I messed around with a few pre-trained models, but they were all either too slow or produced muddy results. So, naturally, I decided to train my own. I grabbed a bunch of sketches and drawings from around the web and spent hours feeding them through the training pipeline, using a neural network based on some research papers I’d found. Honestly, most of the training process was just me staring at a progress bar and hoping for the best.
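I won’t pretend I can reconstruct the real architecture from those papers here, so below is a toy Keras encoder-decoder standing in for it, trained to map photo frames to sketch-style frames. The `photos.npy` / `sketches.npy` files are hypothetical placeholders for the paired data; collecting and pairing that data was most of the actual work.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    # Toy encoder-decoder standing in for the real architecture.
    # Input: a 256x256 RGB frame in [0, 1]; output: a sketch-style frame.
    inputs = keras.Input(shape=(256, 256, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return keras.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss="mae")  # L1 loss keeps lines crisper than MSE

# Hypothetical paired dataset: photos and matching sketch targets,
# both (N, 256, 256, 3) float arrays scaled to [0, 1].
x_train = np.load("photos.npy")
y_train = np.load("sketches.npy")

model.fit(x_train, y_train, batch_size=8, epochs=20)
model.save("sketch_model.keras")
```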
When the model was finally trained, I plugged it into my video stream. And… it kinda worked! The output was super janky at first. It looked like a shaky, distorted version of me with weird lines all over the place. But after some tweaking of the parameters and a lot of trial and error, I managed to get something that looked like a semi-coherent sketch. Not exactly Picasso, but hey, it was progress!
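The glue itself was roughly the following, assuming the capture loop and the saved model from the sketches above. Downscaling each frame to the model’s input size before inference was one of the tweaks that made the biggest dent in the jank.

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("sketch_model.keras")  # the toy model from above

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Run the model at 256x256 and scale the result back up; full-resolution
    # inference is far too slow for a live feed.
    small = cv2.resize(frame, (256, 256)).astype(np.float32) / 255.0
    sketch = model.predict(small[np.newaxis], verbose=0)[0]
    out = cv2.resize((sketch * 255).astype(np.uint8),
                     (frame.shape[1], frame.shape[0]))
    cv2.imshow("sketch", out)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```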
Now for the “streamer” part. I wanted to be able to broadcast this live so people could see it. I set up OBS (Open Broadcaster Software) to capture the output of my program and then stream it to Twitch. Getting OBS configured properly was another whole adventure, let me tell you. There were settings for bitrate, resolution, audio input… I felt like I was back in college doing a sound engineering course.
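For the record, I mostly just pointed OBS’s Window Capture at my program’s preview window. Another route, sketched here with the pyvirtualcam package rather than anything from my actual setup, is to push the processed frames into a virtual webcam that OBS picks up as a regular video source:

```python
import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)

# Requires a virtual camera backend (e.g. the one OBS itself installs).
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        # ...run the sketch model on `frame` here, as in the loop above...
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
        cam.sleep_until_next_frame()  # pace output to the declared fps

cap.release()
```

Either way, the bitrate and resolution wrangling still happens inside OBS itself.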

Finally, everything was set up. I hit the “Go Live” button, and… nothing happened. Just kidding! People actually started tuning in. I was genuinely surprised. I spent the next few hours just talking to the chat and messing around with the filters. It was pretty hilarious watching myself turn into different kinds of sketches in real time.
Here’s a quick rundown of what I used:
- Python (because everyone uses Python, right?)
- OpenCV (for video capture and processing)
- TensorFlow/Keras (for the machine learning model)
- OBS Studio (for streaming)
The challenges I faced:
- Getting the camera feed to work smoothly
- Training a decent style transfer model
- Optimizing the model for real-time performance (see the sketch right after this list)
- Figuring out OBS settings (so many settings!)
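About that real-time optimization bullet: beyond downscaling frames, one plausible move (a sketch of an approach, not a transcript of what I ran) is converting the trained Keras model to TensorFlow Lite so inference gets cheaper on CPU:

```python
import tensorflow as tf

# Convert the trained model (file names from the earlier sketches) to TFLite.
model = tf.keras.models.load_model("sketch_model.keras")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default optimizations, incl. quantization
with open("sketch_model.tflite", "wb") as f:
    f.write(converter.convert())

# At stream time, the TFLite interpreter stands in for model.predict():
interpreter = tf.lite.Interpreter(model_path="sketch_model.tflite")
interpreter.allocate_tensors()
```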
The lessons I learned:
- AI is cool, but it takes a lot of work
- Streaming can be surprisingly fun
- People on the internet will watch anything
In the end, “sketch age streamer” was a fun little project that let me play around with some cool tech. It was a bit janky, and the results weren’t always perfect, but I learned a lot in the process. Plus, it was kinda cool to see myself turned into a living, breathing sketch. Who knows, maybe I’ll turn it into a full-time gig! Nah, just kidding… unless?