Combine the techniques from Part 3 in a practical way to strengthen the dataset. Understand how the Keras code is used to augment a dataset in a Jupyter* Notebook running on the Intel® DevCloud.
Hey, it's Karl here with the fourth video in our Hands-On AI series. Today, we show how all of the augmentation and cleaning techniques we've discussed so far in the series come together. This is our Jupyter* Notebook, which you can download from the links provided. If you are unfamiliar with Jupyter Notebooks, they are an open-source web application and development tool that supports many languages and frameworks. They are a great way to share live code, equations, visualizations, and narrative text in one document. You can also run each cell individually as you work on it, instead of running the whole program.
I highly recommend them to anyone starting in data science or machine learning. You can see the earlier sections of the Jupyter Notebook go into each of the functions separately. Feel free to explore this on your own. In this video, we're going to focus on the Combination section. Here we apply all the described augmentation transformations simultaneously and see what happens. Remember the parameters of each of the transformations are chosen randomly from within the specified range. Thus, we should have a considerably diverse set of samples.
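To make the random-parameter idea concrete, here is a small sketch using Keras' `ImageDataGenerator` (the generator class used throughout this series). The `get_random_transform` call draws a fresh set of transformation parameters from within the ranges you configure; the specific range value below is illustrative, not taken from the video's notebook.

```python
# Sketch: each augmented sample gets its own randomly drawn parameters,
# sampled from within the ranges you specify on the generator.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=40)  # rotations up to +/-40 degrees

# Draw one random set of transformation parameters for a 128x128 RGB image.
params = datagen.get_random_transform((128, 128, 3))
print(params['theta'])  # a rotation angle sampled from [-40, 40]
```

Because every draw is independent, two calls will almost always produce different angles, which is what gives the augmented dataset its diversity.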
Let's initialize our image data generator with all the available options turned on and test it on an image of a red hydrant. Note that there are two methods to fill the modified space: constant filling and nearest. Constant picks one value and applies it to all of the empty space. For these generated images, we're going to use the more elaborate filling mode, called nearest. This mode assigns the color of the nearest existing pixel to each blank pixel. Filling is important because it prevents unintentional black space and gives a better approximation of what we see in the real world.
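A generator set up this way might look like the following sketch. The parameter values are illustrative assumptions rather than the exact ones from the notebook; the key point is that all of the transformations discussed earlier are enabled at once, with `fill_mode='nearest'` handling the empty space.

```python
# Sketch: an ImageDataGenerator with all of the discussed augmentations
# enabled simultaneously. Range values here are illustrative.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=40,       # random rotations up to +/-40 degrees
    width_shift_range=0.2,   # horizontal shifts up to 20% of width
    height_shift_range=0.2,  # vertical shifts up to 20% of height
    shear_range=0.2,         # random shear transformations
    zoom_range=0.2,          # random zoom in/out
    horizontal_flip=True,    # random mirroring
    fill_mode='nearest',     # fill blank pixels with the nearest pixel's color
)
```

With `fill_mode='constant'` instead, you would also pass `cval` to choose the single value used for the empty space.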
As you can see here, we specified all of our augmentations, and now we have transformed our original image into eight different versions. Think of these as different perspectives of an object in the real world. You can also see how the nearest fill mode fills the modified space with plausible colors instead of black space, which is nearly impossible to find in the real world.
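Generating those eight versions from a single image can be sketched like this. The random array below stands in for the hydrant photo, since the actual image file is only available in the notebook; in practice you would load your image into the same 4-D batch shape.

```python
# Sketch: draw eight randomly augmented versions of one image.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=40, width_shift_range=0.2,
                             height_shift_range=0.2, shear_range=0.2,
                             zoom_range=0.2, horizontal_flip=True,
                             fill_mode='nearest')

# Stand-in for the hydrant photo: one 128x128 RGB image as a batch of one.
image = np.random.randint(0, 256, size=(1, 128, 128, 3)).astype('float32')

# Each call to the flow iterator applies a fresh random transformation.
augmented = [next(datagen.flow(image, batch_size=1))[0] for _ in range(8)]
print(len(augmented))  # eight versions, each shaped (128, 128, 3)
```

Each element of `augmented` is one randomly transformed copy, so the set as a whole mimics seeing the same object from several perspectives.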
By using these techniques, you are helping to ensure your model is better prepared to recognize this object as a red fire hydrant. You want your application to be robust and capable of identifying an object from multiple perspectives. Thanks for watching. Be sure to check out the links to read the article associated with the series and to learn more about AI.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804