machinethink.net
2020 is the year when machine learning on mobile is no longer the hot new thing. Adding some kind of intelligence to apps has become standard practice. Fortunately, that doesn’t mean Apple has stopped innovating. 😅 In this blog post, I’ll summarize what’s new in Core ML and the other AI and ML technologies in the Apple ecosystem. Core ML: Last year was a big update for Core ML, but this year …
Over the past 18 months or so, a number of new neural network architectures were proposed specifically for use on mobile and edge devices. It seems that pretty much everyone has now figured out that large models such as VGG16 or ResNet-50 aren’t a good idea on small devices. 😉 I have previously written about MobileNet v1 and v2, and have used these models in many client projects. But it’s 2020 and …
This blog post is a lightly edited chapter from my book Core ML Survival Guide. If you’re interested in adding Core ML to your app, or you’re running into trouble getting your model to work, then check out the book. It’s filled with tips and tricks to help you make the most of the Core ML and Vision frameworks. You can find the source code for this blog post in the book’s GitHub repo. Enjoy!
A few weeks ago I wrote about YOLO, a neural network for object detection. I had implemented that version of YOLO (actually, Tiny YOLO) using Metal Performance Shaders and my Forge neural network library. Since then, Apple has announced two new technologies for doing machine learning on the device: Core ML and the MPS graph API. In this blog post we will implement Tiny YOLO with these new APIs. …
Object detection is the computer vision technique for finding objects of interest in an image. This is more advanced than classification, which only tells you what the “main subject” of the image is; object detection can find multiple objects, classify them, and locate where they are in the image. An object detection model predicts bounding boxes, one for each object it finds, as well as …
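To make the bounding-box idea concrete, here is a minimal sketch (plain Python, not code from the original post) of intersection-over-union, the standard metric detection models use to measure how well a predicted box matches a ground-truth box. Boxes are assumed to be `(x_min, y_min, x_max, y_max)` tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Corners of the overlapping rectangle (empty if the boxes don't intersect).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU of 1.0 means a perfect match, 0.0 means no overlap at all; detection benchmarks typically count a prediction as correct when IoU exceeds some threshold such as 0.5.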
Object detection is one of the classical problems in computer vision: recognize what the objects inside a given image are and also where they are in the image. Detection is a more complex problem than classification, which can also recognize objects but doesn’t tell you exactly where an object is located in the image, and which won’t work for images that contain more than one object. YOLO is a clever …
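Because a detector often predicts several overlapping boxes for the same object, pipelines like YOLO’s are usually followed by non-maximum suppression. The excerpt above doesn’t show this step, so the following is only a sketch of the standard greedy algorithm (plain Python, with boxes assumed to be `(x_min, y_min, x_max, y_max)` tuples):

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: list of (x_min, y_min, x_max, y_max); scores: matching confidences.
    Returns the indices of the boxes to keep, highest score first.
    """
    def iou(a, b):
        # Overlap area divided by the combined area of both boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # Visit boxes from most to least confident; drop any box that
    # overlaps an already-kept box by more than the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

The greedy version shown here is O(n²) in the number of boxes, which is fine for the few hundred candidates a model like Tiny YOLO produces per image.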
Let’s say you wanted to create a sweet bouncing cube, like this: You might use a 3D framework such as OpenGL or Metal. That involves writing one or more vertex shaders to transform your 3D objects, and one or more fragment shaders to draw these transformed objects on the screen. The framework then takes these shaders and your 3D data, performs some magic, and paints everything in glorious 32-bit color. …
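As a rough, framework-agnostic illustration of what a vertex shader does (plain Python, not the Metal code the post builds up to), here is the kind of per-vertex transform a shader applies: a rotation around the Y axis, run once for each corner of the cube:

```python
import math

def rotate_y(vertex, angle):
    """Rotate a 3D vertex around the Y axis, the kind of transform a vertex shader applies."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

# Unit cube corners; a vertex shader would run the transform once per vertex, on the GPU.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
rotated = [rotate_y(v, math.pi / 2) for v in cube]
```

On the GPU this multiplication is expressed as a 4×4 matrix applied in the vertex function, but the arithmetic is the same as this sketch.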
This is a transcription of a talk I gave at the Dutch CocoaHeads meetup in Rotterdam in July 2015. Some of the code is out of date now that we have Swift 3; however, the ideas are still valid. If you paid attention to this year’s WWDC, you’ve probably seen the session on Protocol-Oriented Programming, or at least heard about it. [Lots of nodding heads in the audience.] It’s an interesting session …