I have both the Google AIY kit and the Movidius stick. I am working on getting them to play nicely with each other.

Alyssa
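
P.S. For anyone who wants to poke at the compute-stick side of this, here is a rough sketch of what the Movidius NCSDK v1 Python bindings (the mvnc module) look like. It assumes a model has already been compiled to a "graph" file with the SDK's mvNCCompile tool and that the input has been preprocessed into a numpy array - the VisionBonnet on the AIY kit has its own separate Python API (aiy.vision), so this only covers the USB stick.

    # Rough sketch: one inference on the Movidius Neural Compute Stick
    # via the NCSDK v1 Python bindings (mvnc). Assumes 'graph' was produced
    # by mvNCCompile and the input is a preprocessed float16 numpy array.
    import numpy
    from mvnc import mvncapi as mvnc

    devices = mvnc.EnumerateDevices()              # find attached sticks
    if not devices:
        raise RuntimeError('No Movidius stick found')

    device = mvnc.Device(devices[0])
    device.OpenDevice()

    with open('graph', 'rb') as f:                 # compiled network blob
        graph_blob = f.read()
    graph = device.AllocateGraph(graph_blob)       # load it onto the stick

    image = numpy.zeros((224, 224, 3), numpy.float16)  # stand-in input image
    graph.LoadTensor(image, 'user object')         # queue the inference
    output, _ = graph.GetResult()                  # blocks until the result is back
    print('top class index:', output.argmax())

    graph.DeallocateGraph()
    device.CloseDevice()

The stick only runs the forward pass; compiling the model and any pre/post-processing still happens on the Pi.
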
<div id="ydp8a89c743yahoo_quoted_2890978527" class="ydp8a89c743yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Thursday, November 30, 2017, 6:23:07 PM CST, Carl Ott <carl.ott.jr@gmail.com> wrote:
<div><div id="ydp8a89c743yiv3151894868"><div dir="ltr"><div>I've been waiting for an eval kit like this since CES in January, when <span style="color:rgb(43,45,50);font-family:arial, helvetica, sans-serif;">Dave Ackley and I saw</span><span style="color:rgb(43,45,50);font-family:arial, helvetica, sans-serif;"> a cool demo using the neural net based Movidius VPU - albeit that one was packaged as a USB compute stick. This version runs on a bonnet for Raspberry Pi Zero W:</span></div><div><br></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px;"><div><span style="color:rgb(43,45,50);"><i>The VisionBonnet circuit board has an Intel Movidius MA2450 low-power vision processing unit, which can run neural network models right on the device. You'll get software, too, which has three TensorFlow-based neural network models: one to recognize a thousand common objects, another that can recognize faces and expressions and a third that can detect people, cats and dogs. </i></span></div><div><span style="color:rgb(43,45,50);"><br></span></div></blockquote><font color="#2b2d32" face="arial, helvetica, sans-serif">Check this out:</font><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px;"><div><font color="#2b2d32" face="Guardian TextEgyp, serif"><a href="https://www.movidius.com/solutions/vision-processing-unit" rel="nofollow" target="_blank">https://www.movidius.com/solutions/vision-processing-unit</a></font></div><div><font color="#2b2d32" face="Guardian TextEgyp, serif"><a href="https://uploads.movidius.com/1463156689-2016-04-29_VPU_ProductBrief.pdf" rel="nofollow" target="_blank">https://uploads.movidius.com/1463156689-2016-04-29_VPU_ProductBrief.pdf</a></font></div></blockquote><div><div><br></div><div>Pre-Order from Micro-Center</div></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px;"><div><div><a href="https://www.engadget.com/2017/11/30/google-diy-ai-camera-kit-raspberry-pi/" style="font-size:12.8px;" rel="nofollow" target="_blank">https://www.engadget.com/2017/ 11/30/google-diy-ai-camera- kit-raspberry-pi/</a></div></div><div><div><a href="http://www.microcenter.com/site/content/Google_AIY.aspx?ekw=aiy&rd=1" rel="nofollow" target="_blank">http://www.microcenter.com/site/content/Google_AIY.aspx?ekw=aiy&rd=1</a></div></div></blockquote><div><div><br></div><div>Oh yeah. </div><div><br></div><div>Google:</div></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px;"><div><div><a href="https://aiyprojects.withgoogle.com/vision" rel="nofollow" target="_blank">https://aiyprojects.withgoogle.com/vision</a> </div></div></blockquote><div><div><br></div><div>I suppose, if we wanted to bring the award winning / beer finding SmartCamBot back to life with one of these, we'd have to train it to recognize bottles versus crayons, but that's a small price to pay for real-time / local image processing at this level...</div><div><br></div><div>Who else is in?</div></div><div><br></div><div>- Carl</div><div><br></div></div></div>_______________________________________________<br>DPRGlist mailing list<br><a href="mailto:DPRGlist@lists.dprg.org" rel="nofollow" target="_blank">DPRGlist@lists.dprg.org</a><br><a href="http://lists.dprg.org/listinfo.cgi/dprglist-dprg.org" rel="nofollow" target="_blank">http://lists.dprg.org/listinfo.cgi/dprglist-dprg.org</a><br></div>