Arduino Camera with Code (25+ New Arduino Hacks 2019)


In this blog, we are going to build a security camera that automatically uploads pictures to the Web. We will connect a camera to the Arduino Yún board, and use its powerful features to control this camera easily and upload pictures to the Web.

 

What we are going to build is a system that can detect motion, and if some motion is detected, can automatically take a picture and save it both on the local SD card attached to the Yún board and to a cloud storage; in our case, Dropbox. We are also going to make the camera stream a video live on a private YouTube feed.

 

Getting started


Let's see what we are going to do in this project in more detail:

First, we are going to build the hardware part of the project with a typical USB camera, a PIR motion sensor, and an SD card.

 

Then, we will write some code to test all the hardware connections of the project. We'll check whether the motion sensor is working correctly and try to take a picture with the camera while it is connected to the Arduino Yún board.

 

After testing the hardware, we are going to build the first application, which captures pictures whenever some motion is detected and automatically stores these pictures on the SD card.

 

Right after building this simple local application, we are going to connect the project to the cloud. The project will do the same as in the earlier case, take pictures when some motion is detected.

 

But this time the pictures will also be uploaded to your Dropbox folder. This way, the pictures can be seen in real time from anywhere, as you can log in to Dropbox from any web browser.

 

Finally, we are going to stream a video to the Web, so you can always check what's going on in your home from a mobile phone or tablet, wherever you are in the world. For this application, we are going to install a streaming library on the Yún board and make it continuously stream a video over Wi-Fi.

 

This stream will be acquired by your computer and sent to YouTube via a dedicated software. On YouTube, we will then be able to access this live stream just as you would watch a typical YouTube video.

 

The required hardware and software components


First, let's see which components we need for the project. Apart from the Yún board, you will need three components: a USB camera, a PIR motion sensor, and an SD card. We will only make direct connections to Yún in this part, so you won't need a breadboard to make electrical connections.

 

The most important component of this project is the USB camera. We are using a standard USB webcam from Logitech, the C700 model, which can record pictures up to HD resolution. Of course, you can use another camera if you already have one on your desk.

 

Make sure that the camera is compatible with USB Video Class (UVC). Most recent webcams are compatible with this protocol. A camera that is not officially UVC compatible might still work, but there is no guarantee.

 

You can find a list of all UVC compatible cameras at http://en.wikipedia.org/wiki/List_of_USB_video_class_devices.

 

Also, try to choose a camera with at least HD resolution so that you get nice, clear pictures. This is not so important for the streaming part, but it can be great if you want to use this project for applications other than security, for example, to create time-lapse videos. The following is an image of the USB camera we are using, the C700 USB webcam from Logitech:

 

Then, there is the PIR motion sensor. This is a really inexpensive sensor that detects the infrared radiation emitted by anything warm, such as humans, and can therefore sense motion in a room. We could have used the camera itself to detect motion, but that would not have been as efficient.

 

The camera uses quite a lot of power when it is on, whereas a PIR motion sensor uses nearly no power. It would also have been more difficult to write the software required to detect motion efficiently from the camera recording. We used a PIR motion sensor from Parallax.

 

Again, you can use other brands of PIR sensors. The main thing to consider is that it should work with 5V voltage levels because that is the voltage level used by the Yún.

 

Most sensors work with both 3.3V and 5V voltage levels, so you shouldn't have many problems with this characteristic. When motion is detected, it should simply put a logical high level on its signal pin.


For the SD card, we used a standard micro SD card. Usually, you will already have one in your digital camera or smartphone. You will need to format it correctly so that the Yún can use it. We recommend that you use the official SD card formatter from the SD Association; see https://www.sdcard.org/downloads/formatter_4/.

 

Now, on the software side, you will need a bit more than just the Arduino IDE. We are going to install the required software for the camera directly on the Yún board when we connect to it via SSH, but you will need the Temboo Python SDK to upload pictures to Dropbox. You can find the SDK at https://www.temboo.com/download.

 

Then, you also need a Dropbox account so that you can upload pictures to it. You can simply create an account by going to https://www.dropbox.com/home.

 

Once your account is created, you need to create an app that will be used by your project. This basically means that you have to authorize the project you are going to build in this blog to automatically send pictures to your Dropbox account without having to enter your login and password every time.

 

You will also be given all the required information (such as an API key) that we will enter later in the Python script on Yún.

 

Perform the following steps to create an app:

1. To create an app, first go to https://www.dropbox.com/developers/apps.

 

2. Then, click on Create app in the top-right corner of the window. You can now choose the type of app you want to create. In our case, we want to use the Dropbox API directly.

 

3. You will then be prompted to choose the kind of data your app needs to store. We want to upload pictures, so choose Files and datastores.

4. You can then finish the process of creating your Dropbox app.

5. On the confirmation page that describes the app, you will need to write down the App key and App secret, which we will need for the rest of the project.

 

6. Also, make sure that the Permission type field is set to App folder. This will ensure that the pictures are uploaded to the folder dedicated to the app and that the Yún won't have access to the rest of your Dropbox folder.

 

7. What you need now is the Token key and Token secret for your Dropbox app, so you can enter them later in the software of our project.

 

To get them, the first step is to go to the InitializeOAuth Choreo on the Temboo website at https://temboo.com/library/Library/Dropbox/OAuth/InitializeOAuth/. Here, you will need to enter the App key and App secret.

 

This will generate some additional information such as a callback ID and a temporary token secret. You'll also be asked to visit a link to Dropbox to confirm the authentication.

 

8. Finally, go to the FinalizeOAuth page at https://temboo.com/library/Library/Dropbox/OAuth/FinalizeOAuth/ to finish the process. You'll be asked to enter your App key, App secret, callback ID, and temporary token secret.

 

After this step, you'll be given your final Token key and Token secret. Write them down as you'll need them later.

 

Making hardware connections


It's now time to assemble our project. As we are going to use most of the Yún's connectivity, such as the USB port, it will be quite easy and quick to assemble the project. First, simply put the formatted micro SD card into the SD card reader of the Yún, which is located below the Yún board, as shown in the following image:

 

Next, you need to connect the PIR motion sensor to the Yún board. It has three pins: VCC, GND, and SIG (the signal pin). Connect VCC to the Yún's 5V pin, GND to the Yún's ground, and SIG to pin number 8 of the Yún. You should end up with a setup similar to the following image:

 

Finally, you can connect the Yún to your computer via a micro USB cable or power it with a USB adapter if you want to use the project remotely and upload the Arduino sketches via Wi-Fi.

 

Testing your hardware connections


Now that all the connections are made, we can test the project. To get started, we will take care of the motion sensor. For this, we will write a very simple sketch that will only make use of the embedded Atmel microcontroller on the Yún board. We first need to declare the pin that the sensor is connected to, as follows:

const int sensor_pin = 8;

 

Then, in the setup() function, we will start the Serial connection, as follows:

Serial.begin(9600);
delay(1000);

 

We can also set some delay before data is read from the sensor, as it needs some time to initialize and work correctly. In the loop() function, we continuously read the value from pin number 8.

 

Remember that the sensor will simply return a logical high state if some motion is detected and a low state otherwise. This means that we can store the sensor's reading in a Boolean variable, as shown in the following line of code:

boolean sensor_value = digitalRead(sensor_pin);

 

This value is then printed to the Serial monitor every 100 milliseconds using the following lines of code:

Serial.println(sensor_value);
delay(100);

The complete code for this part can be found at https://github.com/openhomeautomation/geeky-projects-yun/tree/master/blog3/pir_test.

 

You can now upload the preceding code on to your Yún board. Open the Serial monitor and try to pass your hand in front of the sensor; you should see the value change instantly on the Serial monitor.

 

If the values change instantly as you pass your hand in front of the sensor, the Yún is wired correctly. You may also notice an LED on the sensor light up when it detects motion.

 

Now we are going to test the USB camera. We can actually test the camera without writing an Arduino sketch. What we are going to do instead is connect directly to the Yún board via SSH.

 

Indeed, the camera is interfaced directly to the Linux machine of the Yún via the USB port, so the Arduino sketch will later have to use the Bridge library in order to access the camera.

 

For now, just open a terminal window (the typical terminal that comes installed with OS X or Linux, or a tool such as PuTTY if you are on Windows), and type the following command:

ssh root@yourYunName.local

 

Of course, you will have to replace yourYunName with the name you gave your own Yún. For example, my Yún is named myarduinoyun, so I need to type ssh root@myarduinoyun.local. This will establish a direct connection with the Linux machine of the Yún.

 

You will then be prompted to enter the password that you chose for your Yún. If it works, you should see the following screenshot being displayed on your terminal, which indicates that you are now working directly on the Yún:

 

You can access all the functions from your Yún Linux machine. We are now going to install the required software for the camera. This requires the Arduino Yún to be connected to the Internet so that it can get the required packages, as described in the following steps:


1. The process starts by updating the package manager, opkg, as follows:

opkg update

2. Install the UVC drivers, as follows:

opkg install kmod-video-uvc

3. Install the python-openssl package that we will use later in the project, as shown in the following command:

opkg install python-openssl

 

4. Finally, you can install the fswebcam software that we will use to take pictures, as shown in the following command:

opkg install fswebcam

 

5. Once this part is done and the software is installed on the Yún, we can test the camera and take a picture. To also test whether the SD card is working at the same time, go to the SD card folder, which is usually called sda1, using the following command:

cd /mnt/sda1

 

6. You can now take a picture by typing the following command:

fswebcam test.png

 

7. You should see some messages being printed, starting with the following:

--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.

Some errors might be printed as well, but this doesn't matter for the process of taking a picture. To check whether this worked correctly, you can first check whether there is a file named test.png located on the SD card. To do this, simply type the following command:

ls

 

The preceding command will print the list of all the files in the current folder; in the present case, the SD card. You should see at least a file named test.png.

 

Now, to check that the picture is fine and not corrupted, you can, for example, remove the SD card from the Yún (after unmounting it first using the umount /dev/sda1 command), and plug it directly into your computer using a micro SD to standard SD card adapter.

 

You should see something similar to the following screenshot in your file browser (we had already added the files that are required for the next sections of the project at this point, which explains the other files located on the SD card):

 

If you see a picture on your SD card at this point, open it to check that it was correctly taken. If that's the case, congratulations! Everything is now set up for you to write exciting applications with this project. If you can't see a picture at this point, the first step is to repeat the whole process again.

 

Be careful to actually unmount the SD card after the picture is taken. You can also plug the camera directly into your computer to check whether the problem comes from the camera itself.

 

Recording pictures when motion is detected


The first application we are going to build with the hardware that we just set up will be only local, so nothing will be sent to the Web yet. In this section, we just want to build a camera that is triggered by the motion sensor.

 

With this, you can, for example, check whether somebody entered your home while you were not there because the PIR motion sensor would instantly notice it.

 

This section is really the foundation of the whole project. We are going to reuse the code developed in this section later when we write the piece of code to upload pictures to Dropbox.

 

For this part of the project, we don't want to use the SSH access to take pictures anymore; we need to trigger the camera right from the Arduino sketch. For this, we are going to use the Bridge library and the Process library to call a command on the Linux machine, just as if you were typing it on a terminal window.

 

The sketch starts by declaring the libraries that we need to use:

#include <Bridge.h>
#include <Process.h>

To call some commands on the Yún's Linux machine, we will need to declare a process, which is an object that we will call to emulate some terminal entries:

Process picture;

 

We'll also build a filename for each picture that will be taken. Indeed, we named the file test.png earlier, but in this application, we want every picture taken by the project to have a different name:

String filename;

Declare the pin on which the motion sensor is connected, as follows:

int pir_pin = 8;

 

We also need to define where the pictures will be stored. Remember, we want to store them all on the SD card, as follows:

String path = "/mnt/sda1/";

 

You can also store pictures locally on the Yún, but it would quickly saturate the memory of the Arduino Yún. Then, in the setup() function, we start the bridge between the Atmel microcontroller and the Linux machine of the Yún, as follows:

Bridge.begin();

 

Also, we set the pin of the PIR motion sensor as an input, as follows:

pinMode(pir_pin, INPUT);

In the loop() function, what we want to do is continuously read data from the motion sensor and trigger the camera if any motion is detected. This is done by a simple if statement that checks the sensor's value, as follows:

if (digitalRead(pir_pin) == true)
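The control flow here is very simple: poll the sensor, and shoot when it reads high. It can be sketched in Python with stubbed I/O; digital_read and take_picture below are hypothetical stand-ins for digitalRead() and the fswebcam call, not real Yún APIs:

```python
pictures_taken = []

def digital_read(pin):
    # Stand-in for digitalRead(): pretend the PIR sensor reports motion.
    return True

def take_picture():
    # Stand-in for the fswebcam call made via runShellCommand().
    pictures_taken.append("picture.png")

# One pass of the sketch's loop(): poll the sensor, shoot on motion.
if digital_read(8):
    take_picture()

print(len(pictures_taken))  # 1 picture was "taken"
```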

 

Then, if some motion is detected, we need to prepare everything to take the picture. The first step is to build a filename that contains the date on which the picture was taken. To do so, we use the Linux date command, which outputs the current Unix timestamp.

 

This is important because we want to know what time the picture was taken at and give a unique filename to every picture. At the end, we also want to specify that this picture will be taken in a PNG format. The filename formatting part is done by the following code:

filename = "";
picture.runShellCommand("date +%s");
while (picture.running());
while (picture.available() > 0) {
  char c = picture.read();
  filename += c;
}
filename.trim();
filename += ".png";
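To see what this naming scheme produces, here is the same idea sketched in Python (a quick illustration to run on a regular computer, not code for the Yún): date +%s prints the current Unix timestamp (seconds since 1970), so each picture gets a unique, chronologically sortable name.

```python
import time

def make_filename():
    # Same idea as the sketch: the output of "date +%s" (the current
    # Unix timestamp), trimmed, with a ".png" extension appended.
    timestamp = str(int(time.time()))
    return timestamp + ".png"

print(make_filename())  # e.g. "1418316000.png"
```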

 

Finally, we can take the picture. What we are going to do here is to call the fswebcam command again using the runShellCommand function of our picture process that will emulate a terminal entry.

 

We also want to use the maximum resolution available on the camera. In the case of the camera we chose, it was 1280 x 720 (standard HD resolution). We have quite a lot of space available on the SD card (4 GB with the one I used), so you can use the maximum resolution without running into problems.

 

We recommend that you use a dedicated SD card for this project so that you don't run into problems with other files that could be stored on the card.

 

For the sake of simplicity, we won't add an automated check to see whether the card is full, but this is something you should consider if you want to let the project run continuously over time. You can specify the resolution using the -r option at the end of the call.

Finally, we can build the complete code to take a picture:

picture.runShellCommand("fswebcam " + path + filename + " -r 1280x720");
while (picture.running());

 

Note that we are also using a while() statement here to make sure that the webcam utility has enough time to take the picture. The complete code can be found at https://github.com/openhomeautomation/geeky-projects-yun/tree/master/blog3/triggered_camera.

 

You can now upload the code to the Yún board and test the project. Once it's uploaded, try moving your hand in front of the sensor. The Arduino Yún should trigger the camera to take a picture and save it to the SD card.

 

To make sure that a picture was taken at this point, you can simply check on the camera itself. For example, the Logitech webcam that I used has a small LED that turns green whenever it is active.

 

After a while, remove the SD card from the Arduino Yún (as earlier, unmount the SD card from the Yún first), and put it in your computer with the adapter we used earlier. You should see all the pictures that were taken at the root of the SD card.

Again, check these pictures to make sure that they are not corrupted and that everything worked as planned.

 

Sending pictures to Dropbox at regular intervals


We are now going to extend the code we built in the previous section and write some new code that automatically uploads the pictures that were taken by the camera to Dropbox. For this, we will need to build a slightly more complex software than in the previous part.

 

So far, we have only used the Choreos (Temboo libraries) for the Arduino Yún. However, there are actually many Choreos available for other languages, such as Python. This is great news because the Linux machine of the Yún can run Python code out of the box.

 

It's actually much easier to access the Dropbox API from Python, so that's what we are going to use in this part. We will build a Python script that uploads the pictures we took to Dropbox, and call this script from the Arduino sketch using the Bridge library and our picture process.

 

I will now explain the content of the Python script. Later, we will save all these lines of code in a separate file, and put it on the SD card along with the Temboo Python SDK.

 

The Python script starts with the following lines of code (note that we also import the sys and base64 modules, which are used just below):

import sys
import base64
from temboo.core.session import TembooSession
from temboo.Library.Dropbox.FilesAndMetadata import UploadFile

 

The Python script will also take an argument: the name of the file to be uploaded. This way, we can directly pass the name of file (built by the Arduino code with a timestamp) to the Python script. The following lines of code do exactly this:

with open(str(sys.argv[1]), "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())
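If you want to check this encoding step in isolation on your computer, here is a self-contained sketch (the fake image bytes are just a stand-in for a real picture file):

```python
import base64
import tempfile

# Create a small stand-in "picture" file on disk.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\x89PNG fake image bytes")
    path = f.name

# Encode its contents exactly as the upload script does.
with open(path, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())

# Base64 is reversible: decoding gives back the original bytes.
assert base64.b64decode(encoded_string) == b"\x89PNG fake image bytes"
```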

Inside the script, you need to define your Temboo credentials, as follows:

session = TembooSession('yourTembooName', 'yourTembooApp', 'yourTembooKey')

 

These are exactly the same credentials we used for Temboo earlier. We then need to declare the upload file Choreo for Python that will be used to automatically upload pictures to Dropbox, as follows:

uploadFileChoreo = UploadFile(session)

uploadFileInputs = uploadFileChoreo.new_input_set()

 

The next step is to set the different inputs, using the values you obtained when you created your Dropbox app, as follows:

uploadFileInputs.set_AppSecret("appSecret")
uploadFileInputs.set_AccessToken("accessToken")
uploadFileInputs.set_FileName(str(sys.argv[1]))
uploadFileInputs.set_AccessTokenSecret("accessTokenSecret")
uploadFileInputs.set_AppKey("appKey")
uploadFileInputs.set_FileContents(encoded_string)
uploadFileInputs.set_Root("sandbox")

 

Finally, we can order uploadFileChoreo to upload the file to your Dropbox folder in the corresponding folder of your app, as follows:

uploadFileResults = uploadFileChoreo.execute_with_results(uploadFileInputs)

 

You can now save this code in a file named upload_picture.py and put it at the root of the SD card. Remember the Temboo Python library we downloaded earlier? It's time to unpack it and place it at the root of the SD card as well.

 

Just make sure that it appears with the name temboo in the root of the SD card, so the Python file we just created can access it correctly. If no pictures have been recorded yet, the following screenshot shows what your SD card folder should look like:

 

We also need to slightly modify the Arduino sketch to upload pictures to Dropbox. We use exactly the same code base as in the previous section, so we will only detail the new code that is added.

 

In the part that is executed when motion is detected, right at the end of the loop, you need to use the picture process again to execute the Python script, as shown in the following code:

picture.runShellCommand("python " + path + "upload_picture.py " + path + filename);
while(picture.running());
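On the Linux side, runShellCommand behaves like launching a process and reading back its output. The equivalent pattern in Python, using subprocess, looks like this (shown here with a harmless echo in place of the real command):

```python
import subprocess

# Equivalent of picture.runShellCommand(...) followed by reading the
# output: run a command, wait for it to finish, capture what it printed.
output = subprocess.check_output(["echo", "upload finished"]).decode().strip()
print(output)  # upload finished
```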

 

Note that we are passing along the same filename and path as the pictures that are recorded on the SD card, so the exact same picture name is recorded locally and sent to Dropbox.

 

You can now put the SD card back into the Arduino Yún, upload the updated Arduino sketch, and head to your Dropbox folder. At this point, you should see that a new folder was created in your Apps folder with the name of the Dropbox app that you set on the Dropbox website, as shown in the following screenshot:

 

Now, if motion is detected, the sketch should not only save the pictures on the SD card, but also upload them to your Dropbox folder. If everything is working correctly, you will see pictures arrive in your Dropbox folder in real time as the Yún takes them with the USB camera.

 

The cool aspect of this part of the project is that the pictures can be checked from anywhere in the world: from the Dropbox application on your computer, of course, but also from any web browser.

 

Many mobile devices can also run the mobile version of Dropbox, so you can see if somebody has entered your home right from your mobile device. On my computer, for example, Dropbox also sends me a notification that a new file was uploaded, so I can instantly see whether something is happening in my house and can act accordingly.

 

Live video streaming via Wi-Fi


To finish this blog, we are going to see another cool application that we can make with the Arduino Yún and our USB camera. Remember that the camera is actually a standard webcam and that it is also made to capture videos.

 

Wouldn't it be cool to automatically stream video on a private video channel on the Web, so you can watch over your home in real time from anywhere just by going into a web browser? That's exactly what we are going to do in this section.

 

Many commercial IP cameras are actually doing this with proprietary solutions, but I wanted to use commonly available tools; this is why we chose the YouTube live event service to stream the video that can then be accessed from any device.

To make the application work, we first need to install some additional software packages on the Yún, as shown in the following steps:

 

1. Connect to the Yún again using SSH with your Arduino Yún name and password, and type the following command to get the correct package for live streaming:

wget http://www.custommobileapps.com.au/downloads/mjpg-streamer.ipk

 

2. Note that if the link is not valid anymore and you can't find the file, this package is also available inside the code of this blog. You can now install it with the following command:

opkg install mjpg-streamer.ipk

 

3. You can now start the live streaming software on your Arduino Yún using the following command:

mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 640x480 -f 25" -o "output_http.so -p 8080 -w /www/webcam" &

 

Here, the parameter after -r is the resolution, the one after -p is the port on which the stream will be available, and the one after -f is the number of frames per second. The other options are less important, and you do not have to worry about them.

 

Note that we didn't stream at HD resolution; it was apparently too much for the Arduino Yún, and the video stream suffered significant lag and also had corrupted images, which is not what we want at all.

 

You can then access your stream by going to your Arduino Yún's address in your web browser, followed by :8080 to specify the correct port. For example, in my case, it was http://myarduinoyun.local:8080/stream.html.

 

This actually gives you direct access to the live stream. You should then see the stream interface with the live stream in the middle of the page. You can also use the different elements of the menu on the left to explore other possibilities of this streaming software. For example, you can get a link for VideoLAN, so you can access your stream right from the VLC player.

 

Now, this is already great, and you could stop here and access your video stream from your local Wi-Fi network. But it would be even better if the stream were available online, so you could access it from anywhere in the world without being connected to your local Wi-Fi network. The first step is to go to your YouTube account, open VIDEO MANAGER, and go to the Live Events section.

 

From this menu, you can create your stream just like you would create a new YouTube video. Make sure that you put the video as unlisted unless you want other people on YouTube to be able to see what's going on in your home.

 

Unlike a private video, you will still be able to share the stream with people you know just by giving them its URL. Then, on the next page, YouTube will ask you to choose which encoder you want to use.

 

I chose Wirecast from the list and downloaded it from their website. In the Wirecast interface, you need to set the correct video source (by default, it will stream from your computer's webcam).

 

In the menu where you can select the video source, select Web Stream Source and configure it, as shown in the following screenshot:

 

Basically, you need to choose HTTP as the protocol, use Motion JPEG as the format, and put the URL from the VideoLAN tab of the streaming interface. For example, for my project, it was myarduinoyun.local:8080/?action=stream.
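The stream URL follows a simple pattern: the Yún's hostname, mjpg-streamer's port, and the ?action=stream query string. A small hypothetical helper (stream_url is not part of any library) makes the pattern explicit:

```python
def stream_url(host, port=8080):
    # Build the Motion JPEG stream URL served by mjpg-streamer.
    return "http://%s:%d/?action=stream" % (host, port)

print(stream_url("myarduinoyun.local"))
```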

 

After a moment, if everything is working fine, you should see the live stream from your USB camera appear directly in the main window of Wirecast. Don't worry if there is some lag at this point; it is usually only a short delay. In my case, I had about 1-2 seconds of delay in the Wirecast software.

 

Also, make sure that this stream is the only one that will be sent to YouTube. For this purpose, delete all the other streams from the Wirecast interface. Indeed, by default, Wirecast puts the stream that comes from your webcam on the interface.

 

Also, note that using an HTTP stream is officially a feature of the paid version of Wirecast; it still works in the free version, but a watermark will be displayed on the video from time to time. Don't worry; this is not really a problem for monitoring what is going on in your house.

 

The next step is to actually stream data to YouTube. Click on the Stream button at the top of the interface, which should turn red, after which you will be prompted to enter your YouTube credentials. It should then automatically detect the live event video that you just created on YouTube.

 

Accept the settings, make sure it is streaming from Wirecast, and go back to the YouTube interface. You can now go to the video manager and go to the Live Control Room tab.

 

This is where you should see that YouTube is actually receiving some data from your Arduino Yún via Wirecast running on your computer. It should indicate that the Stream Status is GOOD, as shown in the following screenshot:

 

If this is not the case, go back to the Wirecast application to check that the streaming process is working correctly. At this point, don't worry that your stream is not live just yet. You should see that the Preview button, as shown in the following screenshot, is now available; just click on it.

 

YouTube will then prepare your stream, and you will have to wait for a moment (around 30 seconds when I tried it). After a while, the page will update automatically so that you can move to the next step and actually start streaming.

 

Note that before making the stream live, you can preview it using the options on the preceding page. If what you see is satisfactory, you can now click on Start Streaming to finally finish the process. You will then have access to the stream on this page or directly on the dedicated page of the stream.

 

You can see from the red dot below the video that the video is streaming live. Because the video is marked as Unlisted, only people with the URL can access it.

 

You can, for example, mark the video as a favorite in your YouTube account and then access it from anywhere. You can also share it with your family and friends, so they can also watch the stream from their browsers.

 

Note that because we are using the Wirecast software on our computer to encode the stream for YouTube, we need to have our computer on for this to work.

 

At the time this blog was written, no software was available to directly stream the video to YouTube without the help of a computer, but this might change in the future, removing the need for a computer to stream the video.

 

Building a Cloud Spy Camera


We are now going to build a very famous project for secret agents: a spy camera. This will be a camera that can, for example, be installed in a room, hidden behind a set of books, and help you monitor the room from another location.

We are going to build two projects based on the same hardware. These will be the key topics covered in this blog:

 

The first project will be a spy camera that takes a picture every time motion is detected in front of it and uploads it to your Dropbox account. The pictures will then be accessible to the spy from anywhere in the world.

 

Finally, we'll end the blog by making the camera stream live videos on a local Wi-Fi network. This will be perfect for a spy who wants to see what's happening in a room while being hidden in another room or outside. Let's dive in!

 

Hardware and software requirements


First, let's see what the required components for this project are.

 

For once, we are not going to use an Arduino Uno board, but an Arduino Yun. We need not only Wi-Fi connectivity, but also the onboard USB port of the Yun, which will make it really easy to use a USB camera in our project. The following is the Arduino board that I used for this project:

 

Then, you need a USB camera. You need a camera that is compatible with the USB Video Class (UVC). Basically, most recent USB cameras are compatible with this standard. I recommend the Logitech C270 USB camera that I used for this project:

 

You will also need a PIR motion sensor to detect whether there is motion in front of the camera. Any brand will be fine; you just need it to be 5V-compatible. This is the sensor that I used for this project:

 

Finally, here is the list of all the components that we will use in this project:

  • Arduino Yun (http://www.adafruit.com/product/1498)
  • USB camera (http://www.logitech.com/en-us/product/hd-webcam-c270)
  • PIR motion sensor (http://www.adafruit.com/product/189)
  • A microSD card (at least 2 GB)
  • Breadboard (https://www.sparkfun.com/products/12002)
  • Jumper wires (https://www.sparkfun.com/products/8431)

 

On the software side, you will only need the Arduino IDE. If you are using Windows, you'll also need terminal software; I recommend PuTTY, which you can download from http://www.putty.org/.

 

Hardware configuration


Now, let's configure the hardware for this project. It will be really simple, and we'll also set up the Wi-Fi connectivity of the Yun. Here is a schematic to help you out (excluding the USB camera):

 

You just need to connect the PIR motion sensor to Yun. Connect VCC to the Arduino 5V pin, GND to GND, and the output of the sensor to pin number 8 of the Arduino Yun. Finally, insert the USB camera into the USB port and the microSD card into the Arduino Yun. The following is how it should look in the end:

 

Now, we are going to set up Yun so that it connects to your Wi-Fi network. For that, the best way is to follow the latest instructions from Arduino that are available at https://www.arduino.cc/en/Guide/ArduinoYun.

 

Then, you should be able to connect to your Yun via your favorite web browser and access it with the password that you set earlier. After that, you'll be able to see that your Yun is working:

 

Now, we'll access it from a terminal in order to install some modules. If you are on Windows, I recommend using PuTTY to type these commands. Start by connecting to your Yun (replace the address with that of your Yun) with: ssh root@arduinoyun.local

  • Now, type the following command: opkg update
  • When this is done, type: opkg install kmod-video-uvc
  • Then enter: opkg install fswebcam
  • Finally, type the following command: opkg install mjpg-streamer

Congratulations, your Yun is now fully operational for this project!

 

Setting up your Dropbox account

It's now time to set up your Dropbox account. First, make sure that you actually have an account by simply visiting the Dropbox website. Then, we'll need to create a Dropbox app. For this, go to https://www.dropbox.com/developers/apps.

 

  • Then, you can create a new app with the Create app button:
  • Now, in the parameters of the app, there are two things that you need: App key and App secret. You can find them both on the same page:
  • Once you have these, you can move to the next step and configure your Temboo account.

 

Setting up your Temboo account

We are going to use the Temboo service to link our hardware to the Dropbox app that we just created. This will allow us to upload files to Dropbox.

 

You first need to set up a new Temboo account from the following URL: https://www.temboo.com/library/.

Then, we need to actually authorize our Temboo account (and therefore, our Arduino project) in order to use your Dropbox app. For this, go to https://www.temboo.com/library/Library/Dropbox/OAuth/InitializeOAuth/.

 

You will be asked to enter your Dropbox App key and App secret. Once you click on Run, there are two things that you'll need to do. First, follow the link that is given to you by Temboo:

 

After that, you'll need to grab the CallbackID and OAuthTokenSecret and go to the page at https://www.temboo.com/library/Library/Dropbox/OAuth/FinalizeOAuth/.

 

On this page, you can enter all the information that you have received so far. You'll then be given an access token and a token secret, which you will need for the spy camera project:

 

Note that the Dropbox API is subject to change in the future. Therefore, always check the Temboo page and follow the instructions given there if they are different from the ones presented in this blog.

 

You just need one more thing from Temboo: some data about your account. Go to https://www.temboo.com/account/applications/. There, you can see the information about the application that you created when opening your account. Keep this page open; you'll need this information later.

 

Saving pictures to Dropbox


Finally, we are going to make our first application using the hardware that we built. There will be two parts here: an Arduino sketch and a Python script.

 

The Arduino sketch will be in charge of taking a picture when motion is detected and of calling the Python script. The Python script will actually upload the pictures to Dropbox via Temboo every time it is called by the Arduino sketch.

 

This is the complete Arduino sketch:

// Sketch to upload pictures to Dropbox when motion is detected
#include <Bridge.h>
#include <Process.h>

// Picture process
Process picture;

// Filename
String filename;

// Motion sensor pin
int pir_pin = 8;

// Path to the SD card
String path = "/mnt/sda1/";

void setup() {

  // Start the Bridge
  Bridge.begin();

  // Set the motion sensor pin as an input
  pinMode(pir_pin, INPUT);
}

void loop(void)
{
  if (digitalRead(pir_pin) == true) {

    // Generate a filename with a timestamp
    filename = "";
    picture.runShellCommand("date +%s");
    while (picture.running());

    while (picture.available() > 0) {
      char c = picture.read();
      filename += c;
    }
    filename.trim();
    filename += ".png";

    // Take a picture
    picture.runShellCommand("fswebcam " + path + filename + " -r 1280x720");
    while (picture.running());

    // Upload to Dropbox
    picture.runShellCommand("python " + path + "upload_picture.py " + path + filename);
    while (picture.running());
  }
}

Let's see what the most important parts of this sketch are. First, you need to include the required libraries:

  • #include <Bridge.h>
  • #include <Process.h>

Then, we will define the path to the SD card, which is where the pictures will be stored:

String path = "/mnt/sda1/";

After that, we will initialize the Bridge instance, which will allow us, for example, to use the Yun filesystem:

Bridge.begin();

 

Still in the setup() function of the sketch, we will set the motion sensor pin as an input:

pinMode(pir_pin, INPUT);

After that, in the loop() function of the sketch, we will check whether the motion sensor has detected any motion:

if (digitalRead(pir_pin) == true) {

If that's the case, we will first build a filename for the new picture using the current date and time:

filename = "";
picture.runShellCommand("date +%s");
while(picture.running());
while (picture.available()>0) {
char c = picture.read();
filename += c;
}
filename.trim();
filename += ".png";
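For reference, the same timestamp-based naming scheme can be sketched in Python (the language already used for this project's upload script). The `date +%s` call below is the same shell command that the sketch runs on the Yun's Linux side:

```python
import subprocess

# Equivalent of the sketch's shell call: `date +%s` prints the Unix timestamp
timestamp = subprocess.check_output(["date", "+%s"]).decode().strip()

# Build the picture filename exactly as the sketch does
filename = timestamp + ".png"
print(filename)
```

Because the timestamp is the number of seconds since 1970, every picture gets a unique, chronologically sortable name.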
Then, we will use the fswebcam utility to save this picture on the SD card:

picture.runShellCommand("fswebcam " + path + filename + " -r 1280x720");
while (picture.running());

And finally, we will call the Python script to actually upload the picture to Dropbox:

picture.runShellCommand("python " + path + "upload_picture.py " + path + filename);
while (picture.running());

Now, let's see the Python script. The following is the complete script:
# coding=utf-8
# Script to upload files to Dropbox

# Import the required libraries
import base64
import sys
from temboo.core.session import TembooSession
from temboo.Library.Dropbox.FilesAndMetadata import UploadFile

print str(sys.argv[1])

# Encode the image in Base64
with open(str(sys.argv[1]), "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())

# Declare the Temboo session and the Choreo used to upload files
session = TembooSession('yourSession', 'yourApp', 'yourKey')
uploadFileChoreo = UploadFile(session)

# Get an InputSet object for the Choreo
uploadFileInputs = uploadFileChoreo.new_input_set()

# Set the inputs
uploadFileInputs.set_AppSecret("yourAppSecret")
uploadFileInputs.set_AccessToken("yourAccessToken")
uploadFileInputs.set_FileName(str(sys.argv[1]))
uploadFileInputs.set_AccessTokenSecret("yourTokenSecret")
uploadFileInputs.set_AppKey("yourAppKey")
uploadFileInputs.set_FileContents(encoded_string)
uploadFileInputs.set_Root("sandbox")

# Execute the Choreo
uploadFileResults = uploadFileChoreo.execute_with_results(uploadFileInputs)
Now, let's see the most important parts of this script. We will first import the required libraries, including the Temboo modules:

import base64
import sys
from temboo.core.session import TembooSession
from temboo.Library.Dropbox.FilesAndMetadata import UploadFile

You will also need to set up your Temboo account details:

session = TembooSession('yourSession', 'yourApp', 'yourKey')
uploadFileChoreo = UploadFile(session)

Then, we will create a new set of inputs for the Dropbox library:

uploadFileInputs = uploadFileChoreo.new_input_set()

After that, this is where you will need to enter all the keys that we got from Dropbox and Temboo:

uploadFileInputs.set_AppSecret("yourAppSecret")
uploadFileInputs.set_AccessToken("yourAccessToken")
uploadFileInputs.set_FileName(str(sys.argv[1]))
uploadFileInputs.set_AccessTokenSecret("yourTokenSecret")
uploadFileInputs.set_AppKey("yourAppKey")
uploadFileInputs.set_FileContents(encoded_string)
uploadFileInputs.set_Root("sandbox")

Finally, we will execute the upload of the file to Dropbox:

uploadFileResults = uploadFileChoreo.execute_with_results(uploadFileInputs)
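To see what the Base64 step actually does, here is a small standalone sketch that encodes a file's contents the same way the script prepares FileContents for Temboo. A throwaway file stands in for a real webcam capture:

```python
import base64
import tempfile

# Create a throwaway "picture" file instead of a real webcam capture
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"fake picture bytes")
    picture_path = f.name

# Encode the file contents in Base64, as the upload script does
with open(picture_path, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())

print(encoded_string)
```

Base64 turns arbitrary binary data into plain ASCII text, which is why it is used here: the picture can then travel inside an ordinary web request to Temboo.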

 

It's now time to test the project! Note that you can find all the code in the GitHub repository at https://github.com/marcoschwartz/arduino-secret-agents. Then, make sure that you modify the Python script with your own data.

Also, put the SD card into your computer again using an adapter, and place the Python script and the Temboo SDK on the SD card.

 

After that, place the SD card back in your Yun. Open the Arduino sketch with the Arduino IDE and make sure that you have selected the Arduino Yun board. Now, upload the sketch to the board. 

 

Now, try to move your hand in front of the motion sensor; the sensor should go red and you should also notice that the camera becomes active immediately (there is a little green LED on the Logitech C270 camera).

 

You can also check your Dropbox account in the Applications folder. There should be a new folder that has been created, which contains the pictures taken by the spy camera:

 

Congratulations, you have now built your first spy camera project! Note that this folder can, of course, be accessed from anywhere in the world, so even if you are on the other side of town, you can monitor what's going on in the room where the camera is sitting.

 

Live streaming from the spy camera


We are now going to end this blog with a shorter project: using the camera to stream live video in a web browser. This stream will be accessible from any device connected to the same Wi-Fi network as the Yun.

 

To start with this project, log in to your Yun using the following command (replacing the name of the board with the name of your Yun): ssh root@arduinoyun.local

Then, type the following: mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 640x480 -f 25" -o "output_http.so -p 8080 -w /www/webcam" &. This will start the streaming from your Yun. You can now simply go to the URL of your Yun and add :8080 at the end, for example, http://arduinoyun.local:8080.
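mjpg-streamer's HTTP output plugin also serves individual JPEG frames alongside the live stream. Here is a small Python helper that builds both URLs; the hostname is just the example used above, so replace it with that of your own Yun:

```python
def stream_urls(host, port=8080):
    # mjpg-streamer's output_http plugin answers ?action=stream (MJPEG video)
    # and ?action=snapshot (a single JPEG frame)
    base = "http://{}:{}/?action=".format(host, port)
    return base + "stream", base + "snapshot"

stream, snapshot = stream_urls("arduinoyun.local")
print(stream)     # http://arduinoyun.local:8080/?action=stream
print(snapshot)   # http://arduinoyun.local:8080/?action=snapshot
```

The snapshot URL is handy when you want to grab a still image from a script instead of watching the live stream in a browser.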

 

You will arrive at the streaming interface:


You can now stream this video live to your mobile phone or any other device in the same network. It's the perfect project to spy on a room while you are sitting outside, for example.

 

Monitoring Secret Data from Anywhere

In this blog, we are going to build a project that will continuously record data from sensors, and send this data over Wi-Fi so it's accessible from any web browser. This is great for a secret agent that wants to monitor a room remotely, without being seen. You'll of course be able to adapt the project with your own sensors, depending on what you want to record.

 

To do so, these are the steps we are going to take in this blog:

  • We will use Arduino along with the CC3000 Wi-Fi chip, which is quite convenient to give Wi-Fi connectivity to Arduino projects.
  • We will send sensor data to an online service called dweet.io, and then display the result on a dashboard using Freeboard.io.
  • Finally, we'll also see how to set up automated alerts based on the recorded data. Let's dive in!

 

Hardware and software requirements


First, let's see what the required components are for this project.

We'll, of course, use an Arduino Uno as the brain of the project. For Wi-Fi connectivity, we are going to use a CC3000 breakout board from Adafruit. We'll also use a bunch of sensors to illustrate the behavior of the project: a DHT11 sensor for temperature and humidity, a photocell for light levels, and a motion sensor.

 

Finally, here is a list of all the components that we will use in this project:

  • Arduino Uno (https://www.sparkfun.com/products/11021)
  • CC3000 breakout board (http://www.adafruit.com/product/1469)
  • DHT11 sensor with a 4.7k Ohm resistor (http://www.adafruit.com/product/386)
  • Photocell (https://www.sparkfun.com/products/9088)
  • 10k Ohm resistor (https://www.sparkfun.com/products/8374)
  • PIR motion sensor (http://www.adafruit.com/product/189)
  • Breadboard (https://www.sparkfun.com/products/12002)
  • Jumper wires (https://www.sparkfun.com/products/8431)

On the software side, you need the latest version of the Arduino IDE. You will also need the following libraries:

  • Adafruit CC3000 library
  • Adafruit DHT library

To install these libraries, just use the Arduino library manager.

 

Hardware configuration

Now let's assemble the different components of this project. First, connect the power. Connect the Arduino Uno 5V to the red power rail on the breadboard, and the GND pin to the blue power rail.

 

Also, place all the main components on the breadboard. After that, for the DHT11 sensor, follow the instructions given by the schematic to connect the sensor to the Arduino board. Make sure you don't forget the 4.7k Ohm resistor between the VCC and signal pins.

 

We are now going to connect the photocell. Start by placing the photocell on the breadboard in series with the 10k Ohm resistor. After that, connect the other end of the photocell to the red power rail, and the other pin of the resistor to the blue power rail. Finally, connect the pin between the photocell and the resistor to Arduino Uno pin A0.

 

Finally, for the motion sensor, connect the VCC pin to the red power rail, the GND pin to the blue power rail, and finally the output pin of the sensor to Arduino pin 7.

 

Now, we are going to connect the CC3000 breakout board. Connect the pins as indicated on the schematic: IRQ to pin number 3 of the Arduino board, VBAT to pin 5, and CS to pin 10.

 

After that, connect these pins to the Arduino board: MOSI, MISO, and CLK go to pins 11, 12, and 13, respectively. Finally, connect the power to the CC3000 breakout: connect 5V to the Vin pin of the breakout board, and GND to GND.

 

Congratulations, your project is now fully assembled! You can move on to the next part: sending data to the cloud.

 

Sending data to dweet.io

The first step in this project is really to send data to the web so it is stored online. For this, we'll use a service called dweet.io. You can check it out at https://dweet.io/. This is the complete Arduino code for this project:


// Libraries
#include <Adafruit_CC3000.h>
#include <SPI.h>
#include "DHT.h"
#include <avr/wdt.h>

// Define CC3000 chip pins
#define ADAFRUIT_CC3000_IRQ 3
#define ADAFRUIT_CC3000_VBAT 5
#define ADAFRUIT_CC3000_CS 10

// DHT sensor
#define DHTPIN 6
#define DHTTYPE DHT11

// Create CC3000 instance
Adafruit_CC3000 cc3000 = Adafruit_CC3000(ADAFRUIT_CC3000_CS, ADAFRUIT_CC3000_IRQ, ADAFRUIT_CC3000_VBAT,
  SPI_CLOCK_DIV2); // you can change this clock speed

// DHT instance
DHT dht(DHTPIN, DHTTYPE);

// WLAN parameters
#define WLAN_SSID "yourWiFiSSID"
#define WLAN_PASS "yourWiFiPassword"
#define WLAN_SECURITY WLAN_SEC_WPA2

// Dweet parameters
#define thing_name "mySecretThing"

// Variables to be sent
int temperature;
int humidity;
int light;
int motion;

uint32_t ip;

void setup(void)
{
  // Initialize
  Serial.begin(115200);
  Serial.println(F("\nInitializing..."));
  if (!cc3000.begin())
  {
    Serial.println(F("Couldn't begin()! Check your wiring?"));
    while(1);
  }

  // Connect to the WiFi network
  Serial.print(F("Connecting to WiFi network ..."));
  cc3000.connectToAP(WLAN_SSID, WLAN_PASS, WLAN_SECURITY);
  Serial.println(F("done!"));

  // Wait for DHCP to complete
  Serial.println(F("Request DHCP"));
  while (!cc3000.checkDHCP())
  {
    delay(100);
  }

  // Start watchdog
  wdt_enable(WDTO_8S);
}

void loop(void)
{
  // Measure from DHT
  float t = dht.readTemperature();
  float h = dht.readHumidity();
  temperature = (int)t;
  humidity = (int)h;

  // Measure light level
  float sensor_reading = analogRead(A0);
  light = (int)(sensor_reading/1024*100);

  // Get motion sensor reading
  motion = digitalRead(7);
  Serial.println(F("Measurements done"));

  // Reset watchdog
  wdt_reset();

  // Get IP
  uint32_t ip = 0;
  Serial.print(F("www.dweet.io -> "));
  while (ip == 0) {
    if (! cc3000.getHostByName("www.dweet.io", &ip))
    {
      Serial.println(F("Couldn't resolve!"));
    }
    delay(500);
  }
  cc3000.printIPdotsRev(ip);
  Serial.println(F(""));

  // Reset watchdog
  wdt_reset();

  // Check connection to WiFi
  Serial.print(F("Checking WiFi connection ..."));
  if (!cc3000.checkConnected()) { while(1){} }
  Serial.println(F("done."));
  wdt_reset();

  // Send request
  Adafruit_CC3000_Client client = cc3000.connectTCP(ip, 80);
  if (client.connected()) {
    Serial.print(F("Sending request... "));
    client.fastrprint(F("GET /dweet/for/"));
    client.print(thing_name);
    client.fastrprint(F("?temperature="));
    client.print(temperature);
    client.fastrprint(F("&humidity="));
    client.print(humidity);
    client.fastrprint(F("&light="));
    client.print(light);
    client.fastrprint(F("&motion="));
    client.print(motion);
    client.fastrprintln(F(" HTTP/1.1"));
    client.fastrprintln(F("Host: dweet.io"));
    client.fastrprintln(F("Connection: close"));
    client.fastrprintln(F(""));
    Serial.println(F("done."));
  } else {
    Serial.println(F("Connection failed"));
    return;
  }

  // Reset watchdog
  wdt_reset();

  Serial.println(F("Reading answer..."));
  while (client.connected()) {
    while (client.available()) {
      char c = client.read();
      Serial.print(c);
    }
  }
  Serial.println(F(""));

  // Reset watchdog
  wdt_reset();

  // Close connection and disconnect
  client.close();
  Serial.println(F("Closing connection"));
  Serial.println(F(""));

  // Reset watchdog
  wdt_reset();
}

 

Now let's look at the most important parts of the code. First, we need to include the required libraries, such as the CC3000 library and the DHT library:

#include <Adafruit_CC3000.h>
#include <SPI.h>
#include "DHT.h"
#include <avr/wdt.h>
Then, we define which pin the DHT11 sensor is connected to:
#define DHTPIN 6
#define DHTTYPE DHT11
You also need to enter your Wi-Fi name and password:
#define WLAN_SSID "yourWiFiSSID"
#define WLAN_PASS "yourWiFiPassword"
#define WLAN_SECURITY WLAN_SEC_WPA2
Then, you can define a name for your thing that is, the virtual object that will store the data online:
#define thing_name "mySecretThing"
In the setup() function of the sketch, we initialize the CC3000 chip:
if (!cc3000.begin())
{
Serial.println(F("Couldn't begin()! Check your wiring?")); while(1);
}
We also connect to the Wi-Fi network:
cc3000.connectToAP(WLAN_SSID, WLAN_PASS, WLAN_SECURITY);

 

Finally, we initialize the watchdog to 8 seconds. This will automatically reset the Arduino if we don't refresh the watchdog within this delay, which prevents the project from getting stuck:

wdt_enable(WDTO_8S);

 

In the loop() function of the sketch, we first measure data from the DHT sensor:

 

float t = dht.readTemperature();
float h = dht.readHumidity();
temperature = (int)t;
humidity = (int)h;
After that, we measure the ambient light level:

float sensor_reading = analogRead(A0);
light = (int)(sensor_reading/1024*100);

And finally, we get the status of the motion sensor:

motion = digitalRead(7);
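The light-level conversion simply scales the Arduino's 10-bit ADC reading (0 to 1023) to a percentage. A quick check of the formula in Python:

```python
def light_percent(reading):
    # Same formula as the sketch: reading / 1024 * 100, truncated to an int
    return int(reading / 1024 * 100)

print(light_percent(0))     # 0 (complete darkness)
print(light_percent(512))   # 50
print(light_percent(1023))  # 99 (full scale never quite reaches 100)
```

Truncating to an integer keeps the value compact for the URL query string that we send to dweet.io.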
Then, we try to get the IP address of the dweet.io website:
while (ip == 0) {
  if (! cc3000.getHostByName("www.dweet.io", &ip)) {
    Serial.println(F("Couldn't resolve!"));
  }
  delay(500);
}
Then, we connect the project to this IP address:
Adafruit_CC3000_Client client = cc3000.connectTCP(ip, 80);
We can now send the data, following the format given by dweet.io:
client.fastrprint(F("GET /dweet/for/"));
client.print(thing_name);
client.fastrprint(F("?temperature="));
client.print(temperature);
client.fastrprint(F("&humidity="));
client.print(humidity);
client.fastrprint(F("&light="));
client.print(light);
client.fastrprint(F("&motion="));
client.print(motion);
client.fastrprintln(F(" HTTP/1.1"));
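Everything after GET in this request is just a path and a query string. A hedged Python sketch of how that path is assembled (dweet.io accepts the parameters in any order):

```python
def dweet_path(thing, readings):
    # Build the /dweet/for/<thing>?key=value&... path sent in the GET request
    query = "&".join("{}={}".format(k, v) for k, v in readings)
    return "/dweet/for/" + thing + "?" + query

path = dweet_path("mySecretThing",
                  [("temperature", 28), ("humidity", 32),
                   ("light", 87), ("motion", 0)])
print(path)  # /dweet/for/mySecretThing?temperature=28&humidity=32&light=87&motion=0
```

Seeing the request laid out this way makes it clear why the sketch can build it with a handful of print calls: there is no special protocol involved, only a plain HTTP GET.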
After that, we read the answer from the server:
while (client.connected()) {
while (client.available()) {
char c = client.read();
Serial.print(c);
}
}
And we close the connection:
client.close();

 

It's now time to test the project! Make sure you grab all the code, copy it inside the IDE, and change the Wi-Fi details and thing name. Then, upload the code to the board and open the Serial monitor. 

 

You should see that the project is sending measurements to dweet.io and then getting an answer. The most important part is the one indicating that data was recorded:

{"this":"succeeded","by":"dweeting","the":"dweet","with":{"thing":"mySecretThing","created":"2015-09-03T09:38:07.051Z","content":{"temperature":28,"humidity":32,"light":87,"motion":0}}}

You can also check online to make sure data was recorded. Now that we are sure that data is being recorded, we can move to the next step: spying on this data remotely from any web browser!
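Since the answer is plain JSON, it is easy to check programmatically that a dweet succeeded. For example, in Python (using the kind of answer shown above):

```python
import json

# The kind of answer dweet.io returns after a successful dweet
answer = ('{"this":"succeeded","by":"dweeting","the":"dweet",'
          '"with":{"thing":"mySecretThing",'
          '"created":"2015-09-03T09:38:07.051Z",'
          '"content":{"temperature":28,"humidity":32,"light":87,"motion":0}}}')

data = json.loads(answer)
print(data["this"])                       # succeeded
print(data["with"]["content"]["light"])   # 87
```

This is handy if you later want a script on another machine to watch your readings instead of checking the website by hand.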

 

Monitoring the device remotely


We are now going to see how to access the data stored on dweet.io and display it graphically. For that, we are going to use a website called freeboard.io; you can access it at the following URL: http://freeboard.io/.

 

This is the main welcome screen, where you need to create an account. Once you have an account, you can create a new board. Once this is done, you should be redirected to a similar page showing an empty board:

 

First, we need to set a data source, meaning that we need to tell Freeboard to get data from the dweet thing we are storing the data in. Add a new data source and fill out the fields, with the name of the thing that stores your data of course:

 

After that, you will see the data source appearing at the top of your board, with the date of the last update. It's now time to add some graphical elements to our dashboard. We'll first add one for the temperature. Click on a new pane, which will create a new block inside the dashboard:

 

Then, click on the little + sign to create a new widget. Here, we are going to use a gauge widget for the temperature. You should immediately see the gauge for the temperature on your board:

 

Now, let's do the same for humidity. You can also do the same for the ambient light level. You now have all the data from these sensors refreshed in near real-time on your board:

 

The last thing we need to add is the motion sensor. As it's an on/off sensor, I used an indicator widget for this. Now, try to pass your hand in front of the sensor; you should immediately see the indicator change its color. Congratulations, you now have a dashboard that you can access at any time to spy on this data!

 

Creating automated e-mail alerts


There is one more thing we can do with our project. Spying on the data is good, but we are not always behind a computer. For example, we would like to receive an alert via e-mail when motion is detected by the project.

 

dweet.io offers this kind of service at a very low price (less than $1 a month). To use it, you need to make your device private with a lock, which is basically a key that you can get from https://dweet.io/locks.

 

Once we have the key, we can actually set our alert. The dweet.io website explains this very well:

 

To set up an alert, simply go to the following URL, replacing the different parameters with your own: https://dweet.io/alert/youremail@yourdomain.com/when/yourThing/dweet.motion==1?key=yourKey.
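The alert URL is assembled from your e-mail address, the name of your thing, a condition on the dweet's content, and your key. A small Python helper that builds it (all the values here are the same placeholders as above, not real credentials):

```python
def alert_url(email, thing, condition, key):
    # https://dweet.io/alert/<email>/when/<thing>/<condition>?key=<key>
    return ("https://dweet.io/alert/" + email +
            "/when/" + thing + "/" + condition + "?key=" + key)

url = alert_url("youremail@yourdomain.com", "yourThing",
                "dweet.motion==1", "yourKey")
print(url)
```

The condition is an expression on the fields of your dweet, so dweet.motion==1 fires whenever the motion reading sent by the sketch is 1.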

 

Once that's done, you will automatically receive an alert whenever motion is detected by your project, even if you are not actually watching the data!
