<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Özgür Şahin on Medium]]></title>
        <description><![CDATA[Stories by Özgür Şahin on Medium]]></description>
        <link>https://medium.com/@ozgurs?source=rss-32c3e42a9823------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*mWCsd_VUBOM9ZNhH7Gg1jw.jpeg</url>
            <title>Stories by Özgür Şahin on Medium</title>
            <link>https://medium.com/@ozgurs?source=rss-32c3e42a9823------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 27 Apr 2026 13:17:05 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@ozgurs/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Train a Face-Mask Detection Model in Under 5 Minutes using Lobe.ai]]></title>
            <link>https://heartbeat.comet.ml/train-face-mask-detection-model-under-5-minutes-using-lobe-ai-46a26b7b1f17?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/46a26b7b1f17</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[heartbeat]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[lobe]]></category>
            <category><![CDATA[coreml]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Wed, 16 Dec 2020 14:41:43 GMT</pubDate>
            <atom:updated>2021-10-11T14:32:24.373Z</atom:updated>
            <content:encoded><![CDATA[<h4>Exploring the new Lobe software by Microsoft</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UVPRaloV6zuilx8JXXekig.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@shootbyteo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Matteo Di Iorio</a> on <a href="https://unsplash.com/s/photos/discovery?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>Lobe, owned by Microsoft, is a free, no-code tool for training machine learning models without technical skills. Only image classification is supported as of this writing; object detection training is coming soon, according to Lobe’s homepage. You can download it <a href="https://lobe.ai/">here</a> by entering your basic info. It’s a 608 MB download.</p><p>Lobe is a good fit when you want to keep your images private and train the model freely on your own PC. You can also use it to create models for both Android (TensorFlow Lite) and iOS (Core ML).</p><p>Lobe welcomes you with a tour, documentation, and more, as you can see below. Let’s create a new project and see how easy it is to use.</p><figure><img alt="The lobe.ai home screen offering community, projects, examples, etc." src="https://cdn-images-1.medium.com/max/552/1*8tIpvJpCark4vwOYwL7lcw.png" /></figure><p>As of this writing, you can import your image dataset only from folders, so you can’t use CSV files. However, Lobe does let you label your images if they aren’t already labeled or categorized into folders.</p><p>I will use this face-mask classification <a href="https://www.kaggle.com/dhruvmak/face-mask-detection">dataset</a> on Kaggle, which has images categorized into two folders: with_mask and without_mask. Download this dataset, then click Import and select Dataset in Lobe.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m7TN0ztswdpBsCrb0nJcKQ.png" /></figure><p>After importing your dataset, Lobe starts training immediately.</p><p>Lobe automatically splits off 20% of your dataset to test your model. Test images are a random subset of your examples and are not used during training.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CIfvWjGKzZHt3-NX-59usw.png" /></figure><p>In the left panel, you can see your dataset details. You can click on the folder names to check your images.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*McKS3FwH3mYBAxts682Wtw.png" /></figure><p>Lobe shows training progress live in the lower area of the left panel. There you can see how many images it predicts correctly or incorrectly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/568/1*IvtRMEUgmMYIF4LOokCJdw.png" /></figure><p>You can leave the app while it trains; it notifies you with a click sound when training finishes. For this dataset (which has 440 images), training finished in under five minutes. When training finishes, you can check the results and see the correct and incorrect classifications your model made.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*P7EAYt_s0MWOVnOm_QIDPw.png" /></figure><p>Hovering over an image will show you the model’s confidence score.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/654/1*LpJnopFmF1UgmH6_3cjz_A.png" /></figure><p>To see the accuracy of your model on test images, you can select View &gt; Test Images. 
It will show your model’s accuracy on test images it hasn’t seen before.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fTZtijFv9yg455FRK4di9w.png" /></figure><p>In the Play section, you can drag and drop new images or take photos with your webcam. Lobe runs the trained model on each new image, and you can see how well your model does on data it has never seen.</p><p>Here, you can try to trick your model and spot the patterns where it is weak. You can also help improve your model by giving feedback on its predictions: click the checkmark button to add an image to your dataset, and Lobe automatically retrains the model with these new images.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MKB7YkHznZzoW06lhkNLYg.png" /></figure><p>When you’re satisfied with your model, you can export it by clicking File &gt; Export. Lobe allows exporting to a variety of formats, such as TensorFlow Lite and Core ML.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/954/1*SZdRpGCLHIyfvEQfJg5q1A.png" /></figure><p>We’ll export our model in Core ML format for this tutorial. When you export, Lobe asks whether you want to optimize your model. Optimizing performs additional training and can take much longer, but it keeps training as long as the model is improving.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/918/1*Xx60hF7oYwQr1iPaxe9FRg.png" /></figure><p>It exports several files: a readme with sample Swift code for running the model, your model as an mlmodel file, and a signature file which contains information about your Lobe project.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/618/1*ZEF1kRonnB-5Mi-IJ0ZeFA.png" /></figure><p>With Xcode 12+, you can test image classification models easily. Open the mlmodel file in Xcode and drag some images to the left panel. Click on the images to see how precise your model predictions are.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UqnTfP-Pl3lkAaw9W39EYA.png" /></figure><p>If you need a starter project to use this exported model, Lobe offers very handy projects on GitHub. The starter projects take care of many things like opening the camera, running the model using Vision, and showing results in the UI.</p><p>For the iOS sample project, they use SwiftUI, which made this author very happy. Just replace the mlmodel in this iOS sample project and you are ready to go with your new image classification app!</p><p>Check out the starter projects here:</p><p>iOS:</p><p><a href="https://github.com/lobe/iOS-bootstrap">lobe/iOS-bootstrap</a></p><p>Android:</p><p><a href="https://github.com/lobe/android-bootstrap">lobe/android-bootstrap</a></p><p>Web:</p><p><a href="https://github.com/lobe/web-bootstrap">lobe/web-bootstrap</a></p><p>Raspberry Pi:</p><p><a href="https://github.com/microsoft/trashclassifier">microsoft/TrashClassifier</a></p><h3>Conclusion</h3><p>In this post, we’ve learned how to use Microsoft’s new machine learning tool, Lobe, to train image classification models on our PC. We found a face-mask classification dataset on Kaggle and trained and tested a model using this new no-code tool. Personally, I found Lobe very easy and fun to use. I hope more machine learning tasks like object detection and segmentation will come to it soon.</p><p>Thanks for reading!</p><p>If you liked this story, you can follow me on <a href="https://medium.com/@ozgurs">Medium</a> and <a href="https://twitter.com/ozgr_shn">Twitter</a>. You can contact me (for freelancing or questions, etc.) 
via <a href="mailto:ozgur.sahin@aol.com">e-mail</a>.</p><p><em>Editor’s Note: </em><a href="https://heartbeat.comet.ml/"><em>Heartbeat</em></a><em> is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.</em></p><p><em>Editorially independent, Heartbeat is sponsored and published by </em><a href="http://comet.ml/?utm_campaign=heartbeat-statement&amp;utm_source=blog&amp;utm_medium=medium"><em>Comet</em></a><em>, an MLOps platform that enables data scientists &amp; ML teams to track, compare, explain, &amp; optimize their experiments. We pay our contributors, and we don’t sell ads.</em></p><p><em>If you’d like to contribute, head on over to our</em><a href="https://heartbeat.fritz.ai/call-for-contributors-october-2018-update-fee7f5b80f3e"><em> call for contributors</em></a><em>. You can also sign up to receive our weekly newsletters (</em><a href="https://www.deeplearningweekly.com/"><em>Deep Learning Weekly</em></a><em> and the </em><a href="https://info.comet.ml/newsletter-signup/"><em>Comet Newsletter</em></a><em>), join us on</em><a href="https://join.slack.com/t/fritz-ai-community/shared_invite/enQtNTY5NDM2MTQwMTgwLWU4ZDEwNTAxYWE2YjIxZDllMTcxMWE4MGFhNDk5Y2QwNTcxYzEyNWZmZWEwMzE4NTFkOWY2NTM0OGQwYjM5Y2U"><em> </em></a><a href="https://join.slack.com/t/cometml/shared_invite/zt-49v4zxxz-qHcTeyrMEzqZc5lQb9hgvw"><em>Slack</em></a><em>, and follow Comet on </em><a href="https://twitter.com/Cometml"><em>Twitter</em></a><em> and </em><a href="https://www.linkedin.com/company/comet-ml/"><em>LinkedIn</em></a><em> for resources, events, and much more that will help you build better ML models, faster.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=46a26b7b1f17" width="1" height="1" alt=""><hr><p><a href="https://heartbeat.comet.ml/train-face-mask-detection-model-under-5-minutes-using-lobe-ai-46a26b7b1f17">Train a Face-Mask Detection Model in Under 5 Minutes using Lobe.ai</a> was originally published in <a href="https://heartbeat.comet.ml">Heartbeat</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Run and Test Core ML Models in Swift Playgrounds]]></title>
            <link>https://heartbeat.comet.ml/how-to-run-and-test-core-ml-models-in-swift-playgrounds-8e4b4f9cf676?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/8e4b4f9cf676</guid>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[heartbeat]]></category>
            <category><![CDATA[coreml]]></category>
            <category><![CDATA[ios-app-development]]></category>
            <category><![CDATA[mobile-ml]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Mon, 15 Jun 2020 14:03:37 GMT</pubDate>
            <atom:updated>2021-10-01T17:26:44.683Z</atom:updated>
            <content:encoded><![CDATA[<h3>How to Run and Test Core ML Models in a Swift Playground</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Youh3lJT_H34gId8k3ENdQ.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@aaronburden?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Aaron Burden</a> on <a href="https://unsplash.com/s/photos/playground?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>So you’ve trained your Core ML model in <a href="https://heartbeat.comet.ml/detecting-pneumonia-in-an-ios-app-with-create-ml-5cff2a60a3d">Create ML</a> or <a href="https://heartbeat.comet.ml/training-a-core-ml-model-with-turi-create-to-classify-dog-breeds-d10009bd30b6">Turi Create</a>. And now, you want to test it out on some real-world data. For quick and dirty tests, you can leverage <a href="https://www.apple.com/swift/playgrounds/">Swift Playgrounds</a> and run Core ML models there. If you’re satisfied with the results, you can move over to an Xcode project and run on-device or in a simulator.</p><p>In this tutorial, we’ll test a Core ML model in a Swift Playground.</p><p>Before beginning, I have a bit of bad news: you cannot directly use Core ML (.mlmodel) files in a Playground (there is another option I will mention later in this post). Instead, you have to use the model’s generated class and its compiled mlmodelc files.</p><p>Let’s see how to do that!</p><p>Drag and drop your Core ML model into an Xcode project and build the project:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IzRL_Gicl_DDGg6p2V6GBA.png" /></figure><p>After a successful build, find your .app file under the “Products” folder:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/830/1*UUiTKT7sjcFswbdYmBYkqQ.png" /></figure><p>Right-click your app and select “Show Package Contents”, which will open your bundle as a folder:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*OMg1xVB2TDlNxo5uVXo-kQ.png" /></figure><p>Find the mlmodelc folder. It will be named according to your Core ML file.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/814/1*bNuwOjwvVslNu18YCgJcPA.png" /></figure><blockquote>If you don’t want to compile and copy mlmodelc files, there is another option too:</blockquote><blockquote>You can replace the model-loading property in the generated Swift file. You need to copy your Core ML model into the Resources folder. Find the <strong>urlOfModelInThisBundle</strong> variable and change it as shown below. This lets the Playground compile the model and return the path of the compiled model.</blockquote><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6c1a75638e7d2785577a7effcc7247e7/href">https://medium.com/media/6c1a75638e7d2785577a7effcc7247e7/href</a></iframe>
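<p>A minimal sketch of that change, assuming the raw model file is named ThermDetector.mlmodel and sits in the Playground’s Resources folder (use your own model’s file name):</p><pre>import CoreML

// Inside the auto-generated model class, swap the stored bundle URL for one
// that the Playground compiles on the fly from the raw .mlmodel in Resources.
// "ThermDetector" is a placeholder name.
class var urlOfModelInThisBundle: URL {
    let rawModelURL = Bundle.main.url(forResource: "ThermDetector", withExtension: "mlmodel")!
    // compileModel(at:) produces a temporary .mlmodelc folder and returns its URL.
    return try! MLModel.compileModel(at: rawModelURL)
}</pre>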
<p>Create a Playground and make sure the “Navigator” is shown by clicking the button shown below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/854/1*OLqnRUjh5bCoVmMzfWyWKA.png" /></figure><p>Drag and drop this folder into the “Resources” folder in the Playground:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/978/1*B1bj-tfRBghj9d2kDVhDjQ.png" /></figure><p>We also need to provide the Xcode-generated Swift model class to the Playground. Open the Xcode project, select your mlmodel file, and click the arrow next to it (below) to open this auto-generated class:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m4ZqHDyuynQjBBZhhkrgBg.png" /></figure><p>After opening this class, locate it in Finder by right-clicking and then choosing “Navigate-&gt;Show in Finder”:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HJ214E9O4bwFUg0CtANcGw.png" /></figure><p>Drag and drop this file into the “Sources” folder in the Playground. This will let us use our model through this auto-generated class:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*c5w_byTZBRvuE1lp5AhqNQ.png" /></figure><p>If you try to create an instance of your model and run inference, you’ll get the error below:</p><blockquote><strong>error: ThermDetector.playground:5:44: error: ‘model’ is inaccessible due to ‘internal’ protection level</strong></blockquote><p>The class definitions in this auto-generated Swift file have <strong>internal</strong> protection levels for classes and variables. You have to make them public to use them from the Playground.</p><p>Open your model file and add the public keyword to the classes, functions, and variables:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/952/1*TR006qaN2TyaGhHyq07DUg.png" /></figure><p>After fixing the protection levels, run your model, and you’ll see prediction results in the Playground:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tXyzt69ZMJOO4DsAqHkBXA.png" /></figure><h3>Conclusion</h3><p>In this post, we learned how to run Core ML models in Playgrounds, which makes it easy to experiment with pre-trained models. This is yet another example of Apple lowering the machine learning barrier to entry for iOS developers. We should see a lot announced at the upcoming WWDC (June 22–26, virtual event)!</p><p>You can check out some Core ML playground samples shared on GitHub:</p><ul><li><a href="https://github.com/omaralbeik/mnist-coreml">Handwritten digit recognition in Playgrounds</a></li><li><a href="https://github.com/dealforest/CoreMLEasyTryPlayground">This repo is a collection of Playgrounds that you can easily try when experimenting with Core ML.</a></li></ul><p>Thanks for reading!</p><p>If you liked this story, you can follow me on <a href="https://medium.com/@ozgurs">Medium</a> and <a href="https://twitter.com/ozgr_shn">Twitter</a>. You can contact me via <a href="mailto:ozgur.sahin@aol.com">e-mail</a>.</p><p><em>Editor’s Note: </em><a href="https://heartbeat.comet.ml/"><em>Heartbeat</em></a><em> is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.</em></p><p><em>Editorially independent, Heartbeat is sponsored and published by </em><a href="http://comet.ml/?utm_campaign=heartbeat-statement&amp;utm_source=blog&amp;utm_medium=medium"><em>Comet</em></a><em>, an MLOps platform that enables data scientists &amp; ML teams to track, compare, explain, &amp; optimize their experiments. We pay our contributors, and we don’t sell ads.</em></p><p><em>If you’d like to contribute, head on over to our</em><a href="https://heartbeat.fritz.ai/call-for-contributors-october-2018-update-fee7f5b80f3e"><em> call for contributors</em></a><em>. 
You can also sign up to receive our weekly newsletters (</em><a href="https://www.deeplearningweekly.com/"><em>Deep Learning Weekly</em></a><em> and the </em><a href="https://info.comet.ml/newsletter-signup/"><em>Comet Newsletter</em></a><em>), join us on</em><a href="https://join.slack.com/t/fritz-ai-community/shared_invite/enQtNTY5NDM2MTQwMTgwLWU4ZDEwNTAxYWE2YjIxZDllMTcxMWE4MGFhNDk5Y2QwNTcxYzEyNWZmZWEwMzE4NTFkOWY2NTM0OGQwYjM5Y2U"><em> </em></a><a href="https://join.slack.com/t/cometml/shared_invite/zt-49v4zxxz-qHcTeyrMEzqZc5lQb9hgvw"><em>Slack</em></a><em>, and follow Comet on </em><a href="https://twitter.com/Cometml"><em>Twitter</em></a><em> and </em><a href="https://www.linkedin.com/company/comet-ml/"><em>LinkedIn</em></a><em> for resources, events, and much more that will help you build better ML models, faster.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8e4b4f9cf676" width="1" height="1" alt=""><hr><p><a href="https://heartbeat.comet.ml/how-to-run-and-test-core-ml-models-in-swift-playgrounds-8e4b4f9cf676">How to Run and Test Core ML Models in Swift Playgrounds</a> was originally published in <a href="https://heartbeat.comet.ml">Heartbeat</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Create App Store Screenshots with Figma]]></title>
            <link>https://medium.com/nsistanbul/figma-ile-app-store-ekran-g%C3%B6r%C3%BCnt%C3%BCs%C3%BC-nas%C4%B1l-olu%C5%9Fturulur-860b5d4fb289?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/860b5d4fb289</guid>
            <category><![CDATA[app-store]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[figma]]></category>
            <category><![CDATA[mobil-uygulama]]></category>
            <category><![CDATA[tasarım]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Sun, 17 May 2020 07:54:22 GMT</pubDate>
            <atom:updated>2020-05-18T07:38:29.105Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*__X3bKVOzvoOcgAHufmWlw.png" /></figure><p>Bu yazıda App Store uygulama görüntülerini Figma ile nasıl kolayca oluşturabiliriz onu anlatacağım. Öncelikle şu figma <a href="https://github.com/jtholloran/shots/blob/master/AppStore-Shots-4.0.fig">template’ini</a> indirin. Sağ olsun <a href="https://github.com/jtholloran">Josh Holloran</a> uğraşmış, tasarlamış ve Github’ta paylaşmış. Repo’ya star vermeyi unutmayın :)</p><p><a href="https://www.figma.com/">https://www.figma.com/</a> a girin ve oraya bu indirdiğiniz template’i sürükleyip bırakın. Açtıktan sonra ilk sayfadan talimatları okuyabilirsiniz ve ya direkt 2. sayfaya gelip uygulama ekran görüntüsünü yükleyin ve sağdaki çeşitli iPhone ekranları için boyutlandırın ve içlerine ekleyin. Görüntünüzü ‘drop your artwork here’ yazan bölüme eklemeye dikkat edin.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8MmWyUKTDlw4dl1rMEPmsw.png" /></figure><p>Alt tarafta yazılarınızı ve arkaplan renklerini ayarlayın.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/990/1*Ul2npsJwqs3GLwRAMI5jBg.png" /></figure><p>3. Preview sayfasına gelin.</p><p>iPhone modelini değiştirmek isterseniz alttaki Choose an iPhone bölümünü seçip sağ taraftan seçim yapın ve ekran görüntüleriniz hazır.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H4cvoK0ITCPW6vAs2GnNRw.png" /></figure><p>CMD + Shift + E ile hepsini export edin.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sA7Qo1pt9gtQ1EY3wIQwQQ.png" /></figure><p>İşlem bu kadar. John, App Store ikonları için de template hazırlamış, kullanmak isterseniz o da <a href="https://github.com/jtholloran/shots">burada</a>. Eline sağlık John!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=860b5d4fb289" width="1" height="1" alt=""><hr><p><a href="https://medium.com/nsistanbul/figma-ile-app-store-ekran-g%C3%B6r%C3%BCnt%C3%BCs%C3%BC-nas%C4%B1l-olu%C5%9Fturulur-860b5d4fb289">Figma ile App Store Ekran Görüntüsü Nasıl Oluşturulur?</a> was originally published in <a href="https://medium.com/nsistanbul">NSIstanbul</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Creating an Image Dataset Quickly]]></title>
            <link>https://medium.com/@ozgurs/h%C4%B1zl%C4%B1-%C5%9Fekilde-resim-dataseti-olu%C5%9Fturma-cfccf4a40c79?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/cfccf4a40c79</guid>
            <category><![CDATA[makine-öğrenmesi]]></category>
            <category><![CDATA[dataset]]></category>
            <category><![CDATA[derin-ogrenme]]></category>
            <category><![CDATA[yazılım]]></category>
            <category><![CDATA[yapay-zeka]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Fri, 15 May 2020 10:45:25 GMT</pubDate>
            <atom:updated>2020-05-18T13:31:46.260Z</atom:updated>
            <content:encoded><![CDATA[<h3>Creating an Image Dataset from Google Images</h3><p>For simple deep learning projects, you sometimes need to put together an image dataset quickly. In those moments, Google Images, the fastai library, and a bit of scripting will get the job done.</p><p>Note: a small tip for increasing image variety is to also search in different languages.</p><p>For one of my side projects, for example, I needed to build a dataset of pencil sketch drawings.</p><p>I ran searches like the ones below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oa6uVgxa0R49204ZWvJYDg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V9P7MgD5b8eSWt6lyDNv5A.png" /></figure><p>Here you can also use the keywords suggested by Google.</p><p>After searching, scroll all the way down the page, open the Chrome developer tools, and paste the code below into the console.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UmBAvOro0HF5sPsgZTv3ww.png" /></figure><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/695fccae8309d559d730ea51fceac548/href">https://medium.com/media/695fccae8309d559d730ea51fceac548/href</a></iframe><p>This will generate the image URLs as a CSV and open the file.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SyK6gZDGjGqaa0VM_kfbyQ.png" /></figure><p>Save this CSV file.</p><p>Open the Google Colab <a href="https://colab.research.google.com/drive/1qiP9td3i1ylt5e6pU2oh5qQVI1sRyt5g?usp=sharing">notebook</a> I created. Drag and drop your CSV file into the file panel on the left.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/738/1*L8V-BFyuf9CCqHp9qOZVCQ.png" /></figure><p>Then run the cells by pressing the run buttons.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fGOoxNyWybzxLUthaLt1Og.png" /></figure><p>Your files will be downloaded into the images folder.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/648/1*6TIqBIPA8npu3yk7WgPclA.png" /></figure><p>Your images are ready!</p><p>If Google Images isn’t enough, you can also use DuckDuckGo. There’s a usage example <a href="https://github.com/Adrianbakke/Extract-images-ddg/blob/master/exampel_workflow.ipynb">here</a>.</p><p>Note: by the way, the fastai image downloader was throwing an error for empty strings; I fixed that with this <a href="https://github.com/fastai/fastai/commit/4e630af8b416e8ea1553f12f3d0dcd4a7fd7ea86">PR</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cfccf4a40c79" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[This Month’s ProductHunt Came Out of Mardin]]></title>
            <link>https://medium.com/bili%C5%9Fim-hareketi/bu-ay%C4%B1n-producthunt%C4%B1-mardinden-%C3%A7%C4%B1kt%C4%B1-be0a3edba91d?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/be0a3edba91d</guid>
            <category><![CDATA[teknoloji]]></category>
            <category><![CDATA[ekonomi]]></category>
            <category><![CDATA[türkiye]]></category>
            <category><![CDATA[girişim]]></category>
            <category><![CDATA[yazılım]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Wed, 26 Feb 2020 14:44:09 GMT</pubDate>
            <atom:updated>2020-03-04T21:04:55.067Z</atom:updated>
            <content:encoded><![CDATA[<h3>This Month’s ProductHunt Came Out of Mardin</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-ZMpzzN2zdHiWjtGeeswog.png" /></figure><p>You can never guess how far a piece written from the heart, without overthinking, will travel. This is one of those pieces!</p><p>There is something that has been on my mind for a long time. As I watch poverty grow in my country, I keep asking myself what I can do and how I can improve people’s welfare. I wanted to put my thoughts down in writing.</p><p>First of all, we need to build a culture: one that leaves behind shortcuts and favoritism, and that champions production, sharing, hard work, learning new things, and the kind of competition that encourages all of this.</p><p>When building digital products and selling them to the whole world has become this easy, why aren’t we taking a proper share of the market?</p><p>Why doesn’t every region of the country compete with the others and write new mobile apps? I want independent developers to emerge and say: I’m from this town, this month I built this, and with the support of these people I reached the top hundred paid apps list. Let’s celebrate it together. Let’s reward and encourage them. Let’s push the government to provide support and build public awareness. Let’s start social enterprises and form communities. Let’s drop the us-versus-them mentality and move to a community mindset that supports one another, solves each other’s problems, and solves the world’s problems. Let’s bring more foreign currency into the country and work together to raise our standard of living.</p><p><strong>In 2019, people spent 120 billion dollars on the App Store. Mobile ad spending reached 190 billion dollars!</strong></p><p>Let’s build products for the American, Japanese, and Chinese mobile app markets, solve the problems of the people there, and develop games to entertain them.</p><p>Let’s target the markets with the biggest share, like the US and the UK. Why don’t we create a country-wide culture where developers who write apps for the App Store compete, form an ecosystem, keep producing better work, share more, and push each other forward? Let everyone do whatever they can; let’s all put our shoulders to the wheel together.</p><p>Let’s form groups that build a game development culture, organize trainings, and support each other; let’s have them compete, make it part of the public agenda, get it on television, and draw attention to it. Let’s encourage everyone, young and old, to build independent products and create a steady source of income. Let’s design systems where homemakers can do data classification and data labeling from home for machine learning and deep learning. Let’s build a community. Let’s compete all the way and grow together.</p><p>This month, let a developer from Konya reach the top 50 on the US App Store. Let them share and explain their growth techniques. Let that inspire others, let better ones follow; let a developer from Yozgat ship an app that gains huge momentum in China. Let them share it, explain it, and set an example. Let’s build apps and sell them abroad for millions of dollars, like <a href="https://twitter.com/sarperdag">Sarp Erdağ</a> and <a href="https://twitter.com/ckaptanoglu">Çağatay Kaptanoğlu</a> did.</p><p>Let every province stand out for something: let’s be able to say this province is great at growth, that province has produced legends in design, and this other province keeps shipping that kind of web app and they just take off. Let’s encourage young people chasing bug bounties on HackerOne, make it easier for them to share what they know, reward them, and put them on the agenda. 
<a href="https://twitter.com/sametsahinnet">Samet Şahin</a>, 16 yaşında açık bularak para kazananlara çok güzel bir örnek.</p><p>Farklı alanlarda sorunlar yaşayalım yaşadığımız sorunlara çözümler geliştirelim. Biraz daha özgür düşünelim, fark yaratalım şaşırtalım şok edelim korkmadan üretebilelim. En ufak bir fikirsel baskı olmasın karşı çıkalım çünkü farklılıklara saygı gösterilmeyen bir yerde hiç bir inovasyon gelişemez.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sn_vYTavo_KkPwgN" /></figure><p>Arabamızla, evimizle hava atmayı bırakıp yeni bir çiğköfteci-lokmacı açıp batmayı bırakıp geliştiren üreten insanlara yatırım yapalım. 100 bin lirayı bu demode işlere yatıracağımıza uygulama geliştiren arkadaşa yatıralım ortak olalım. Mini yatırılımlar yapıp kitle desteği sağlayan dolandırılmayı minimize edecek web platformları oluşturalım.</p><p>Ev hanımları için Amazon’un Mechanical Turk’u gibi sistemler kuralım eğitelim evden de çalışabilecekleri işler sunalım. Mahkumlara, işsizlere, öğrencilere; veri etiketleme, sınıflandırma öğretelim yapay zeka çağının gereksinimi olan büyük veri ihtiyacını sağlayalım dünyaya satalım.</p><p>Gördüğüm potansiyel engeller şunlar:</p><ul><li>İngilizce</li><li>Programlama</li><li>Temel algoritma eğitimi</li><li>Founder-indie hacker mindseti</li><li>Gerekli donanım</li><li>Pazarlama ve growth desteği</li><li>Motivasyon</li><li>Vergilendirme</li><li>Topluluk</li></ul><p>Yazılım öğrenilir, öğrenemeyen nocode araçları (<a href="https://airtable.com/">Airtable</a>, <a href="https://webflow.com/">Webflow</a>, <a href="https://zapier.com/">Zapier</a>, <a href="https://buildfire.com/">Buildfire</a>, <a href="https://www.appypie.com/">Appypie</a>, <a href="https://www.glideapps.com/">GlideApps</a>) kullanır. Gerekli mentaliteyi örnek öncü kişiler ziyaretler yaparak anlatırlar. Donanım için maddi destek bulunur gerekirse her şehre paylaşımlı hacker spaceler oluşturulur.</p><p>Topluluk olarak farkettiğim güzel topluluklar (benim ilgi alanımdakiler, yorum yazarsanız bilmediklerimi de eklerim) ortaya çıkmaya başladı. Örnek olarak şunları verebilirim: <a href="https://linktr.ee/producthunttr">ProductHuntTR</a>, <a href="https://www.girisimcimuhabbeti.com/">Girişimci Muhabbeti</a>, <a href="https://www.kodluyoruz.org/">Kodluyoruz</a>, <a href="https://medium.com/mobile-growth-istanbul">Mobile Growth İstanbul</a>, <a href="https://gdgistanbul.com/">GDG Istanbul</a>, <a href="http://www.nsistanbul.com/">NSIstanbul</a>, <a href="https://kanvas.istanbul/">Kanvas Tasarım Topluluğu</a>, <a href="https://inzva.com/">Inzva</a>…</p><p>Engeller yaz yaz bitmez ama her bir engel de başlayarak aşılır yeter ki kültür oluşsun örnekler çıksın bunlar konuşulsun gündem olsun.</p><p>Aşılamayacak kadar büyük bir engel yok önümüzde. Birbirimizi aşağı çekmektense yukarı çekelim, hep beraber daha iyiye doğru yol alalım. Herkes fikirlerini ne yapabileceğini yazsın ve ufak ufak adım atmaya başlayalım. Neden yapılamayacağını da yazsın, konuşalım tartışalım. 
Everyone should grab one end of the rope.</p><p>Am I being too naive? Am I asking for too much?</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=be0a3edba91d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/bili%C5%9Fim-hareketi/bu-ay%C4%B1n-producthunt%C4%B1-mardinden-%C3%A7%C4%B1kt%C4%B1-be0a3edba91d">This Month’s ProductHunt Came Out of Mardin</a> was originally published in <a href="https://medium.com/bili%C5%9Fim-hareketi">Bilişim Hareketi</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[4 Techniques You Must Know for Natural Language Processing on iOS]]></title>
            <link>https://heartbeat.comet.ml/4-techniques-you-must-know-for-natural-language-processing-on-ios-7bfcd5da9d20?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/7bfcd5da9d20</guid>
            <category><![CDATA[ios-app-development]]></category>
            <category><![CDATA[mobile-ml]]></category>
            <category><![CDATA[heartbeat]]></category>
            <category><![CDATA[nlp]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Wed, 11 Sep 2019 14:07:20 GMT</pubDate>
            <atom:updated>2021-10-07T17:38:56.543Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wRntXwbpUAiIo3ohtBDwlg.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@sebastian123?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Pereanu Sebastian</a> on <a href="https://unsplash.com/search/photos/typewriter?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>iOS’s <a href="https://developer.apple.com/documentation/naturallanguage">Natural Language</a> framework allows us to analyze language and to perform language-specific tasks like script identification, tokenization, lemmatization, part-of-speech tagging, and <a href="https://heartbeat.comet.ml/natural-language-in-ios-12-customizing-tag-schemes-and-named-entity-recognition-caf2da388a9f">named entity recognition</a>.</p><p>In this introduction tutorial, we will discover this framework’s capabilities by looking at 4 common and essential techniques:</p><p>🔹 Tokenization</p><p>🔹 Language Identification</p><p>🔹 Part-of-speech Tagging</p><p>🔹 Identifying People, Places, and Organizations in Text</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6JfdKmcv66S9dLHw.png" /><figcaption><a href="https://developer.apple.com/videos/play/wwdc2018/713">https://developer.apple.com/videos/play/wwdc2018/713</a></figcaption></figure><h3>1) Tokenization</h3><p>Before we can actually perform natural language processing on a text, we need to apply some pre-processing to make the data more understandable for computers. Usually, we need to split the words to process the text and remove any punctuation marks.</p><p>Apple provides NLTokenizer to enumerate the words, so there’s no need to manually parse spaces between words. Also, some languages like Chinese and Japanese don’t use spaces to delimiter words—luckily, NLTokenizer handles these edge cases for you. For the all <a href="https://developer.apple.com/documentation/naturallanguage/nllanguage">supported languages</a>, NLTokenizer can find the semantic units in a given text.</p><p>The sample below shows how to use NLTokenizer to enumerate the words in a sentence. NLTokenizer takes a unit parameter, which is type NLTokenUnit. This specifies the type of text we’re providing as input. It has four types: word, sentence, paragraph, and document. We can run the sample codes on Create ML to check the results easily.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*itWWmwEDhllZPp2IVs-tDA.png" /></figure><p>It enumerates the words as seen below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/256/1*F67pr5VW0tf7C4Gkf681Ow.png" /></figure><h3>2) Language Identification</h3><p>We can detect the language of a given text by using the <a href="https://developer.apple.com/documentation/naturallanguage/nllanguagerecognizer">NLLanguageRecognizer</a> class. 
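<p>For reference, here is a minimal, runnable sketch of the tokenization code shown in the screenshot above (the sample sentence is just an illustration):</p><pre>import NaturalLanguage

let text = "Natural Language framework makes tokenization easy."

// .word is one of the four NLTokenUnit cases: word, sentence, paragraph, document.
let tokenizer = NLTokenizer(unit: .word)
tokenizer.string = text

// Enumerate every word token in the string and print it.
tokenizer.enumerateTokens(in: text.startIndex..&lt;text.endIndex) { tokenRange, _ in
    print(text[tokenRange])
    return true   // keep enumerating
}</pre>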
<h3>2) Language Identification</h3><p>We can detect the language of a given text by using the <a href="https://developer.apple.com/documentation/naturallanguage/nllanguagerecognizer">NLLanguageRecognizer</a> class. It supports <a href="https://developer.apple.com/documentation/naturallanguage/nllanguage">57 languages</a>.</p><p>We can use it as shown below:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c51ce179eeedc643708fcc8168c2dda9/href">https://medium.com/media/c51ce179eeedc643708fcc8168c2dda9/href</a></iframe><p>dominantLanguage returns the predicted language with the highest confidence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/682/0*jm9940elySdI_hja.png" /></figure><p>To see the probabilities of the other candidate languages, we use the languageHypotheses function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*dirHXa5nTVV_WDdr.png" /></figure><h3>3) Part-of-speech Tagging</h3><p>To understand language better, we need to identify the words and their functions in a given sentence. Part-of-speech tagging allows us to classify nouns, verbs, adjectives, and other parts of speech in a string. Apple provides <a href="https://developer.apple.com/documentation/naturallanguage/nltagger">NLTagger</a>, a linguistic tagger that analyzes natural language text.</p><p>The code sample below shows how to detect the tags of the words by using <a href="https://developer.apple.com/documentation/naturallanguage/nltagger">NLTagger</a>. <a href="https://developer.apple.com/documentation/naturallanguage/nltagscheme/2976610-lexicalclass">Lexical class</a> is a scheme that classifies tokens according to class: part of speech, type of punctuation, or whitespace. We use this scheme and print each word’s type:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7ecc5c3e22db3160f402ce2eb5b8e5c6/href">https://medium.com/media/7ecc5c3e22db3160f402ce2eb5b8e5c6/href</a></iframe><p>As you can see below, it successfully determines the types of the words:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/338/1*jbF-1yBiTFTbfQ33AxGOlw.png" /></figure><p>When using NLTagger, depending on the type that you want to detect, you can specify one or more tag schemes (<a href="https://developer.apple.com/documentation/naturallanguage/nltagscheme">NLTagScheme</a>) as a parameter. For example, the tokenType scheme classifies words, punctuation, and spaces, while the lexicalClass scheme classifies word types, punctuation types, and spaces.</p><p>While enumerating the tags, you can skip specific types by setting the options parameter. In the code shown above, punctuation and whitespace are skipped by setting options to [.omitPunctuation, .omitWhitespace].</p><p>NLTagger can detect all of these lexical classes:</p><blockquote>noun, verb, adjective, adverb, pronoun, determiner, particle, preposition, number, conjunction, interjection, classifier, idiom, otherWord, sentenceTerminator, openQuote, closeQuote, openParenthesis, closeParenthesis, wordJoiner, dash, otherPunctuation, paragraphBreak, otherWhitespace.</blockquote>
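<p>Here is a minimal sketch of NLTagger with the lexicalClass scheme, along the lines of the sample above (the sentence is just an illustration):</p><pre>import NaturalLanguage

let text = "The quick brown fox jumps over the lazy dog."

let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

// Skip punctuation and whitespace while enumerating.
let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace]

tagger.enumerateTags(in: text.startIndex..&lt;text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: options) { tag, range in
    if let tag = tag {
        print("\(text[range]): \(tag.rawValue)")   // e.g. "fox: Noun"
    }
    return true
}</pre>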
<h3>4) Identifying People, Places, and Organizations</h3><p>NLTagger also makes it very easy to detect people’s names, places, and organization names in a given text.</p><p>Finding this type of data in text-based apps opens new ways to deliver information to users. For example, you can create an app that automatically summarizes a text by showing how many times these names (people, places, and organizations) are referred to in that text (a blog post, news article, etc.).</p><p>Let’s see how we can detect these names in a sample sentence:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d8ff537757d87ebf1bf76d286b2b5595/href">https://medium.com/media/d8ff537757d87ebf1bf76d286b2b5595/href</a></iframe><p>Here we use NLTagger again, but this time we set another option called joinNames, which concatenates first names and surnames. To filter personal names, places, and organizations, we create an NLTag array.</p><p>The tags of the words that NLTagger can find are shown below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/550/1*gv2J39DggwTLzZ0x1NEZgA.png" /></figure>
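<p>A minimal sketch of name tagging with the nameType scheme follows; the sentence and names are only illustrative:</p><pre>import NaturalLanguage

let text = "Tim Cook introduced new products at Apple Park in Cupertino."

let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = text

// joinNames merges first and last names into a single token.
let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace, .joinNames]
let interestingTags: [NLTag] = [.personalName, .placeName, .organizationName]

tagger.enumerateTags(in: text.startIndex..&lt;text.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: options) { tag, range in
    if let tag = tag, interestingTags.contains(tag) {
        print("\(text[range]): \(tag.rawValue)")   // e.g. "Tim Cook: PersonalName"
    }
    return true
}</pre>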
<p>As you can see above, we can deduce specific knowledge from text using iOS’s Natural Language framework.</p><h3>Recap</h3><p>We learned <strong>4 powerful techniques</strong> to process text in our apps:</p><p>🔹 Tokenization for all supported languages.</p><p>🔹 Detecting the language of a given text.</p><p>🔹 Performing lexical analysis by detecting nouns, adjectives, verbs, etc.</p><p>🔹 Finding the people, places, and organizations mentioned in a given text.</p><p>It all runs on-device, so user data stays private, and it works without an internet connection. We can analyze text and highlight person names, places, and organizations.</p><p>If you want to learn how to train <strong>custom text classification</strong> models with <strong>Create ML</strong>, you can check out <a href="https://heartbeat.comet.ml/text-classification-on-ios-using-create-ml-f71d7191404a">my previous blog post</a>.</p><p>Thanks for reading!</p><p>If you liked this story, you can follow me on <a href="https://medium.com/@ozgurs">Medium</a> and <a href="https://twitter.com/ozgr_shn">Twitter</a>. If you have any questions or an app idea to discuss, don’t hesitate to contact me via <a href="mailto:ozgur.sahin@aol.com">e-mail</a>.</p><p><em>Editor’s Note: </em><a href="https://heartbeat.comet.ml/"><em>Heartbeat</em></a><em> is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.</em></p><p><em>Editorially independent, Heartbeat is sponsored and published by </em><a href="http://comet.ml/?utm_campaign=heartbeat-statement&amp;utm_source=blog&amp;utm_medium=medium"><em>Comet</em></a><em>, an MLOps platform that enables data scientists &amp; ML teams to track, compare, explain, &amp; optimize their experiments. We pay our contributors, and we don’t sell ads.</em></p><p><em>If you’d like to contribute, head on over to our</em><a href="https://heartbeat.fritz.ai/call-for-contributors-october-2018-update-fee7f5b80f3e"><em> call for contributors</em></a><em>. You can also sign up to receive our weekly newsletters (</em><a href="https://www.deeplearningweekly.com/"><em>Deep Learning Weekly</em></a><em> and the </em><a href="https://info.comet.ml/newsletter-signup/"><em>Comet Newsletter</em></a><em>), join us on</em><a href="https://join.slack.com/t/fritz-ai-community/shared_invite/enQtNTY5NDM2MTQwMTgwLWU4ZDEwNTAxYWE2YjIxZDllMTcxMWE4MGFhNDk5Y2QwNTcxYzEyNWZmZWEwMzE4NTFkOWY2NTM0OGQwYjM5Y2U"><em> </em></a><a href="https://join.slack.com/t/cometml/shared_invite/zt-49v4zxxz-qHcTeyrMEzqZc5lQb9hgvw"><em>Slack</em></a><em>, and follow Comet on </em><a href="https://twitter.com/Cometml"><em>Twitter</em></a><em> and </em><a href="https://www.linkedin.com/company/comet-ml/"><em>LinkedIn</em></a><em> for resources, events, and much more that will help you build better ML models, faster.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7bfcd5da9d20" width="1" height="1" alt=""><hr><p><a href="https://heartbeat.comet.ml/4-techniques-you-must-know-for-natural-language-processing-on-ios-7bfcd5da9d20">4 Techniques You Must Know for Natural Language Processing on iOS</a> was originally published in <a href="https://heartbeat.comet.ml">Heartbeat</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Text Classification on iOS Using Create ML]]></title>
            <link>https://heartbeat.comet.ml/text-classification-on-ios-using-create-ml-f71d7191404a?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/f71d7191404a</guid>
            <category><![CDATA[nlp]]></category>
            <category><![CDATA[heartbeat]]></category>
            <category><![CDATA[coreml]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[mobile-ml]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Mon, 19 Aug 2019 14:04:33 GMT</pubDate>
            <atom:updated>2021-09-30T16:29:37.129Z</atom:updated>
            <content:encoded><![CDATA[<h4>Classifying news categories in an iOS app using machine learning</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YimKbmU0y643c7eeujX3cg.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@freegraphictoday?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">AbsolutVision</a> on <a href="https://unsplash.com/search/photos/news?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>Apple uses <a href="https://heartbeat.comet.ml/the-7-nlp-techniques-that-will-change-how-you-communicate-in-the-future-part-i-f0114b2f0497">natural language processing techniques</a> in many ways on iOS. Thanks to NLP, iOS can auto-fix typos, and Siri can understand what we’re saying. At WWDC 2018, Apple brought these capabilities to developers via a tool called <a href="https://developer.apple.com/documentation/createml/">Create ML</a>. This tool has enabled developers to easily create text classification models (among numerous other kinds of models).</p><p>In this tutorial, we’ll dive into these frameworks to train a machine learning model in Create ML and develop a news classifier app.</p><p><a href="https://developer.apple.com/documentation/naturallanguage">Natural Language</a></p><p>Text classification helps us take advantage of NLP techniques to categorize texts. With text classification, we can <a href="https://heartbeat.comet.ml/using-transfer-learning-and-pre-trained-language-models-to-classify-spam-549fc0f56c20">detect spam messages</a>, <a href="https://heartbeat.comet.ml/training-a-sentiment-analysis-core-ml-model-28823b21322c">analyze the sentiment</a> of tweets (positive, negative, neutral), and even categorize <a href="https://www.defects.ai/">GitHub issues</a>.</p><p>In order to create a classification model with Create ML, the only thing we need is labeled text data. This opens many doors for developers. We can detect the author of an article, find a company’s best- and worst-reviewed products, and even detect various entities (person names, locations, organizations, etc.) in a given text. This is only limited by your imagination and data-gathering techniques. In other words, the sky is the limit.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/838/1*6udxAJafdLQ4UCfwskvU3w.png" /><figcaption><a href="https://developer.apple.com/documentation/createml/creating_a_text_classifier_model">https://developer.apple.com/documentation/createml/creating_a_text_classifier_model</a></figcaption></figure><p>In the software and machine learning world, it’s <em>very</em> important to learn and try the latest tools. If you don’t know how to use these productivity tools, you may waste a lot of time.</p><p>In this tutorial, we’ll take a look at the power of Create ML. 
Create ML became available in Xcode Playgrounds with macOS 10.14.</p><h3>Getting Started</h3><p>Open Xcode and create a blank <strong>macOS</strong> <strong>playground</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7yYI1z1snCmFGDuA1ByyRA.png" /></figure><p>Import the necessary frameworks (CreateML and Foundation).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ph_TH1FA5lc6vH_f-QXSsg.png" /></figure><h4>Gather the Text Data</h4><p>I will be using the <a href="https://storage.googleapis.com/dataset-uploader/bbc/bbc-text.csv">categorized text dataset</a> from BBC news.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-dVqxU13RUYVtEfc0pHatw.png" /></figure><p>This dataset consists of 2245 news articles, with each article belonging to one of 5 text categories: sport, business, politics, tech, or entertainment.</p><p>You can download the dataset in CSV format <a href="https://storage.googleapis.com/dataset-uploader/bbc/bbc-text.csv">from this link</a>. After downloading the file, pass the file path as a parameter to the URL object. Instead of writing this code, you can just drag and drop the file into the playground to create the file path.</p><p>Create ML can read data in two ways: using folders as labels when the files are separated into folders, or by reading from a single file (CSV, JSON). Here, we create a URL object to point to the CSV file, as seen below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*i1vRAMFAh9thp_ouviFaYA.png" /></figure><p>To read the data from the file, we create an MLDataTable. MLDataTable is a structure in Create ML that simplifies the loading and processing of text and tabular data. If you want to learn more about MLDataTable, check out <a href="https://heartbeat.comet.ml/working-with-create-mls-mldatatable-to-pre-process-non-image-data-424f916a093e">this tutorial</a>.</p><p>Next, we create a model using MLTextClassifier to classify natural language text. We guide the MLTextClassifier by specifying the text column and the label column (the news category).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kVFUz4apn48VYBPAaltBFQ.png" /></figure><p>This model learns to associate labels with features of the input text, which can be sentences, paragraphs, or even entire documents.</p>
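<p>As a rough, runnable sketch of the steps shown in the screenshots (it assumes the CSV keeps its original “text” and “category” column names, and the file path is a placeholder you should adjust):</p><pre>import CreateML
import Foundation

// Point a URL at the downloaded BBC news CSV (placeholder path).
let csvURL = URL(fileURLWithPath: "/Users/you/Downloads/bbc-text.csv")

// MLDataTable loads the CSV into a table we can train on.
let data = try MLDataTable(contentsOf: csvURL)

// Train a text classifier: "text" holds the article body,
// "category" holds the news label (sport, business, politics, tech, entertainment).
let classifier = try MLTextClassifier(trainingData: data,
                                      textColumn: "text",
                                      labelColumn: "category")</pre>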
<h4>MLTextClassifier</h4><p>Apple provides a model trainer called <a href="https://developer.apple.com/documentation/createml/mltextclassifier"><strong>MLTextClassifier</strong></a>. It supports <a href="https://developer.apple.com/documentation/naturallanguage/nllanguage">57 languages</a>. The model works in a supervised way, which means your training data needs to be labeled (in this case, the news text and the category of the news).</p><p>The model starts training when you run the playground by clicking the play button.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*G98_oTdiuQkune84Fe4gZA.png" /></figure><p>It’s enough to run this method to train a simple model, but if you want to customize your models, Apple provides some parameters for this. Let’s say we have text data that&#39;s written in another language. Here, we can specify the language parameter. If we want to change the algorithm, two algorithms are supported: <a href="http://blog.datumbox.com/machine-learning-tutorial-the-max-entropy-text-classifier/">maximum entropy</a> and <a href="https://www.wikizero.com/en/Conditional_random_field">conditional random field</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*E7tyeoGN_iR_cznzPTHIyQ.png" /></figure><p>In case these parameters are not enough for your customization needs, Apple suggests using a Python framework called <a href="https://github.com/apple/turicreate"><strong>Turi Create</strong></a>. In August 2016, Apple acquired Turi, a machine learning software startup, and later open-sourced Turi Create.</p><h4>Training the Machine Learning Model</h4><p>After model training is complete, the result can be seen by clicking the show result button.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FSZwuMGQctl38LlAqnSIzg.png" /></figure><p>By default, Create ML sets aside 5% of the data as a validation set. This set isn’t included in the training data, so we can test the model on data it hasn’t seen before. Training stops automatically when the accuracy is high enough.</p><p>While the model is being trained, Create ML shows the current progress in the panel below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QirFdqWPQ7E97rD3DZDOTA.png" /></figure><p>It takes only <strong>0.77 seconds </strong>to train for two iterations.</p><h4>Behind the Scenes of Training the MLTextClassifier</h4><p>MLTextClassifier automatically pre-processes the text data. In this stage, punctuation is removed and <a href="https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html">tokenization</a> is performed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1014/1*2FHp-e1opxL-J0wXnJGWaw.png" /><figcaption><a href="https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html">https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html</a></figcaption></figure><p>Apple doesn’t provide the details of how features are engineered, but if it uses Turi Create behind the scenes, then it has two stages: feature engineering and statistical modeling. Turi Create’s <a href="https://apple.github.io/turicreate/docs/api/generated/turicreate.text_classifier.create.html#turicreate.text_classifier.create">text classifier</a> is a <a href="https://apple.github.io/turicreate/docs/api/generated/turicreate.logistic_classifier.LogisticClassifier.html#turicreate.logistic_classifier.LogisticClassifier">LogisticClassifier</a> model trained on a <a href="https://apple.github.io/turicreate/docs/userguide/text/analysis.html">bag-of-words</a> representation of the text dataset. In the first stage, it creates this representation, which records the frequencies of the words.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/346/1*bfWICQ6LocqZfk8QmYsSRw.png" /><figcaption><a href="https://apple.github.io/turicreate/docs/userguide/text/analysis.html">https://apple.github.io/turicreate/docs/userguide/text/analysis.html</a></figcaption></figure><p>In the next stage, a logistic regression (<a href="https://www.wikizero.com/en/Multinomial_logistic_regression"><strong>multinomial logistic regression</strong></a>) model is trained on these features. This method calculates class probabilities from a linear combination of the features. Basically, the logistic classifier learns how much each word contributes to each class.</p><h4>How Good Is Our Model?</h4><p>To see the details of the trained model, we can print it. This way, we can check its performance on both the training and validation datasets. It shows our model has successfully classified all the validation data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*jr7zpNrnXPMMIVBsTvApUQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/598/1*jylTl4hM75nqiBa50QZ_SQ.png" /></figure><p>Let’s try the model with arbitrary inputs and examine its performance. The text classifier has a prediction method to make the classification. We can see the prediction result directly in the side panel.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3bGgc8ZVRAGhtchIob3Isw.png" /></figure><p>If this model works for us, then we can save it and use it in an iOS app. To provide the model details, we create model metadata. This description is shown in Xcode when we open the model.</p><p>We save the model by giving a path to the write function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I2bVebzDHkLuRqFtlsrLsQ.png" /></figure>
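<p>Continuing the sketch above with the trained classifier, the prediction, metadata, and export steps look roughly like this (the sample headline, description, and output path are placeholders):</p><pre>// Quick sanity check with an arbitrary input string.
let predicted = try classifier.prediction(from: "The championship final drew a record crowd on Saturday")
print(predicted)   // expected to print a category such as "sport"

// Metadata shown by Xcode when the exported model is opened.
let metadata = MLModelMetadata(author: "Özgür Şahin",
                               shortDescription: "Classifies BBC news text into 5 categories",
                               version: "1.0")

// Write the Core ML model to disk; adjust the output path as needed.
try classifier.write(to: URL(fileURLWithPath: "/Users/you/Desktop/NewsClassifier.mlmodel"),
                     metadata: metadata)</pre>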
<p>Since we’ve exported our model, we’re now ready to use it in an iOS app. Create a Single View App and drag and drop your Core ML model into your project.</p><p>Click on the model and check its details. The model is only <strong>808 KB</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EJLaLbyyxymWzmn6gCZzbA.png" /></figure><p>When we drag and drop the model into the project, Xcode <strong>automatically</strong> creates a class for the model. If you want to see the generated code, click on the arrow below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/868/1*BdbIsWZRgD6_c_VOQxnq_w.png" /></figure><p>The class is shown below. This class has the functions needed to load the model and make predictions.</p><blockquote>Note: This class should not be edited.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y0CR6Yy-utPFI0sbZUoRzQ.png" /></figure><p>To use the model, create an instance of the model class in ViewController.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/588/1*xOIFlFw-ANpVlvTz4zxZLw.png" /></figure><p>We call the prediction function to run our model with custom input. To show the result, I created a basic textField and passed its text as a parameter to this function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/704/1*ugWgvce0dgThJrvsC9qy1w.png" /></figure><p>I added a simple button to call this prediction function. You can find the whole project on <a href="https://github.com/ozgurshn/NewsClassifier">GitHub</a>.</p>
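<p>The full wiring is in the GitHub project linked above; as a rough sketch of the ViewController described here (the outlet names are made up, and the NewsClassifier class with its prediction(text:) method and label output are assumed to follow the usual conventions of an Xcode-generated Create ML text classifier):</p><pre>import UIKit
import CoreML

class ViewController: UIViewController {

    // Hypothetical outlets for the text field and result label.
    @IBOutlet weak var newsTextField: UITextField!
    @IBOutlet weak var resultLabel: UILabel!

    // Called by the button that triggers classification.
    @IBAction func classifyTapped(_ sender: UIButton) {
        guard let text = newsTextField.text, !text.isEmpty else { return }
        do {
            // "NewsClassifier" is the class Xcode auto-generates from the .mlmodel file.
            let model = try NewsClassifier(configuration: MLModelConfiguration())
            let output = try model.prediction(text: text)
            resultLabel.text = output.label   // the predicted news category
        } catch {
            print("Prediction failed: \(error)")
        }
    }
}</pre>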
We’re committed to supporting and inspiring developers and engineers from all walks of life.</em></p><p><em>Editorially independent, Heartbeat is sponsored and published by </em><a href="http://comet.ml/?utm_campaign=heartbeat-statement&amp;utm_source=blog&amp;utm_medium=medium"><em>Comet</em></a><em>, an MLOps platform that enables data scientists &amp; ML teams to track, compare, explain, &amp; optimize their experiments. We pay our contributors, and we don’t sell ads.</em></p><p><em>If you’d like to contribute, head on over to our</em><a href="https://heartbeat.fritz.ai/call-for-contributors-october-2018-update-fee7f5b80f3e"><em> call for contributors</em></a><em>. You can also sign up to receive our weekly newsletters (</em><a href="https://www.deeplearningweekly.com/"><em>Deep Learning Weekly</em></a><em> and the </em><a href="https://info.comet.ml/newsletter-signup/"><em>Comet Newsletter</em></a><em>), join us on</em><a href="https://join.slack.com/t/fritz-ai-community/shared_invite/enQtNTY5NDM2MTQwMTgwLWU4ZDEwNTAxYWE2YjIxZDllMTcxMWE4MGFhNDk5Y2QwNTcxYzEyNWZmZWEwMzE4NTFkOWY2NTM0OGQwYjM5Y2U"><em> </em></a><a href="https://join.slack.com/t/cometml/shared_invite/zt-49v4zxxz-qHcTeyrMEzqZc5lQb9hgvw"><em>Slack</em></a><em>, and follow Comet on </em><a href="https://twitter.com/Cometml"><em>Twitter</em></a><em> and </em><a href="https://www.linkedin.com/company/comet-ml/"><em>LinkedIn</em></a><em> for resources, events, and much more that will help you build better ML models, faster.</em></p><h3>Resources</h3><ul><li><a href="https://apple.github.io/turicreate/docs/api/generated/turicreate.text_classifier.create.html#turicreate.text_classifier.create">turicreate.text_classifier.create - Turi Create API 5.6 documentation</a></li><li><a href="https://apple.github.io/turicreate/docs/userguide/text_classifier/">Text classifier · GitBook</a></li><li><a href="https://apple.github.io/turicreate/docs/api/generated/turicreate.logistic_classifier.LogisticClassifier.html">turicreate.logistic_classifier.LogisticClassifier - Turi Create API 5.2 documentation</a></li><li><a href="https://developer.apple.com/documentation/createml/creating_a_text_classifier_model">Creating a Text Classifier Model</a></li><li><a href="https://www.wikizero.com/en/Multinomial_logistic_regression">WikiZero - Multinomial logistic regression</a></li><li><a href="https://apple.github.io/turicreate/docs/userguide/text/analysis.html">Text data · GitBook</a></li><li><a href="https://www.wikizero.com/en/Conditional_random_field">WikiZero - Conditional random field</a></li><li><a href="http://blog.datumbox.com/machine-learning-tutorial-the-max-entropy-text-classifier/">Machine Learning Tutorial: The Max Entropy Text Classifier</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f71d7191404a" width="1" height="1" alt=""><hr><p><a href="https://heartbeat.comet.ml/text-classification-on-ios-using-create-ml-f71d7191404a">Text Classification on iOS Using Create ML</a> was originally published in <a href="https://heartbeat.comet.ml">Heartbeat</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Capture the Best Frame in an iOS Image Processing App]]></title>
            <link>https://heartbeat.comet.ml/how-to-capture-the-best-frame-in-an-ios-image-processing-app-5a14829a03f1?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/5a14829a03f1</guid>
            <category><![CDATA[ios-app-development]]></category>
            <category><![CDATA[vision]]></category>
            <category><![CDATA[heartbeat]]></category>
            <category><![CDATA[coreml]]></category>
            <category><![CDATA[mobile-ml]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Fri, 07 Jun 2019 14:27:45 GMT</pubDate>
            <atom:updated>2021-10-07T14:08:19.963Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y3M6tUA0wpTKysl4MkJb3A.jpeg" /></figure><p>If you’ve ever developed an iOS Vision app that process frames of a video buffer, you know that you need to be careful with your resources. You shouldn&#39;t process each frame—i.e., where the user just moves the camera around.</p><p>In order to classify an image with high accuracy, you’ll need to capture a stable scene. This is crucial for apps that use Vision. In this tutorial, I’ll be diving into this problem and the solution Apple suggests.</p><p>Basically, there are two options in capturing an image:</p><ol><li>Let the user capture the image with a button.</li><li>Capture the best scene (frame) programmatically from the continuous video buffer.</li></ol><p>With the first option, you let the user capture the stable scene. Remember the infamous <a href="https://youtu.be/vIci3C4JkL0?t=67">hot dog app</a> from Silicon Valley—in this app, the user takes the picture with a button. Capturing the best scene is left in the users’ hands.</p><p>In the second option, your app should take care of capturing the best scene whenever the user is moving the camera around.</p><p>That’s where we start begging:</p><blockquote>Me: So, God of Vision! Tell me when the user is holding the phone still.</blockquote><blockquote>God of Vision: Use registration, my child! I provide it via the Vision framework.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*5VFvRyIVcsPyJ0HK.jpg" /></figure><h3>What Is Image Registration?</h3><p><strong>Image registration</strong> is the determination of a geometrical transformation that aligns points in one view of an object with corresponding points in another view of that object (or another object).</p><p>Registration works like this: we’re aligning two images with each other, and the algorithm tells you, “Okay, if you shift it by this many pixels, this is how they would actually match.”</p><p>As Frank Doepke stated at WWDC 18, this is a pretty cheap and fast algorithm, and it will tell me if I hold the camera still or if anything is moving in front of the camera. Vision apps could hypothetically make a classification request on every frame buffer, but classification is a computationally expensive operation — so attempting this could result in delays and poor performance with the UI.</p><p>So Apple suggests classifying the scene in a frame only if the registration algorithm determines that the scene and camera are still, indicating the user’s intent to classify an object.</p><h3>How To Measure Relative Distance Between Images?</h3><p><a href="https://developer.apple.com/documentation/vision/vntranslationalimageregistrationrequest">VNTranslationalImageRegistrationRequest</a> allows developers to check whether the current image from a video buffer is worth spending Vision resources. iOS camera-accessing apps use the captureOutput:didOutputSampleBuffer:fromConnection:<em> </em>delegate<em> </em>method in order to process video frames.</p><p>In this delegate method, we’ll call the registration request, as shown in the code below. 
This request is an image analysis request that determines the affine transform needed to align the content of two images:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e0a6bacad20ce2b4c2f558584cd8cc27/href">https://medium.com/media/e0a6bacad20ce2b4c2f558584cd8cc27/href</a></iframe><p>Here we use sequenceRequestHandler (<a href="https://developer.apple.com/documentation/vision/vnsequencerequesthandler">VNSequenceRequestHandler</a>) with <a href="https://developer.apple.com/documentation/vision/vntranslationalimageregistrationrequest">VNTranslationalImageRegistrationRequest</a> objects to compare consecutive frames, keeping a history of the last 15 frames. <a href="https://developer.apple.com/documentation/vision/vnsequencerequesthandler">VNSequenceRequestHandler</a> is an object that processes image analysis requests for each frame in a sequence (15 frames in this case).</p><p>This algorithm accepts a scene as stable if the Manhattan distance between frames is less than 20:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3f81a131bb0825131330e9680b2c1bc4/href">https://medium.com/media/3f81a131bb0825131330e9680b2c1bc4/href</a></iframe><p>So what is the Manhattan distance? For example, if 𝑥=(𝑎,𝑏) and 𝑦=(𝑐,𝑑), the Manhattan distance between x and y is |𝑎−𝑐|+|𝑏−𝑑|.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/830/1*s3y9kPr_8FuzMsaoNS1fvQ.png" /><figcaption><a href="http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Clustering_Parameters/Manhattan_Distance_Metric.htm">http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Clustering_Parameters/Manhattan_Distance_Metric.htm</a></figcaption></figure><p>After checking the <em>Manhattan</em> distance, we check the results of the image registration request. We read the result of the request as alignmentObservation.alignmentTransform and use it to determine whether the scene is stable enough to perform classification. The recordTransposition function adds data to the transpositionHistoryPoints stack to record the results of the last 15 frames.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a3bbba1bbd868b128b02c848e92d6c96/href">https://medium.com/media/a3bbba1bbd868b128b02c848e92d6c96/href</a></iframe><p>The sceneStabilityAchieved function checks the results of the last 15 frames to detect stability. If scene stability is achieved for these frames, then we can analyze the current frame and pass it to <a href="https://heartbeat.comet.ml/how-to-fine-tune-resnet-in-keras-and-use-it-in-an-ios-app-via-core-ml-ee7fd84c1b26">Core ML for the best classification results</a>.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7a034f4db87c94b620743320296cd0dc/href">https://medium.com/media/7a034f4db87c94b620743320296cd0dc/href</a></iframe><h3>Conclusion</h3><p>In this tutorial, we learned how to use image registration and measure the Manhattan distance between images in order to achieve scene stability. This will allow our Core ML Vision apps to work better and use fewer resources.</p><p>Find the full code of the view controller <a href="https://gist.github.com/ozgurshn/e6eeb69238be0ba21582b61f1dfe1b87">here</a>.</p><p>Thanks for reading!</p><p>If you liked this story, you can follow me on <a href="https://medium.com/@ozgurs">Medium</a> and <a href="https://twitter.com/ozgr_shn">Twitter</a>. 
You can contact me via <a href="mailto:ozgur.sahin@aol.com">e-mail</a>.</p><h3>Resources</h3><ul><li><a href="https://developer.apple.com/documentation/vision/training_a_create_ml_model_to_classify_flowers">Training a Create ML Model to Classify Flowers</a></li><li><a href="https://developer.apple.com/videos/play/wwdc2018/717/">Vision with Core ML - WWDC 2018 - Videos - Apple Developer</a></li></ul><p>And more on image registration from <a href="https://www.semanticscholar.org/paper/CHAPTER-8-Image-Registration-Fitzpatrick/c723ebf73d5a4deb3e9a901450307dea92a8839d">Fitzpatrick, Hill, and Maurer</a>.</p><p><em>Editor’s Note: </em><a href="https://heartbeat.comet.ml/"><em>Heartbeat</em></a><em> is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.</em></p><p><em>Editorially independent, Heartbeat is sponsored and published by </em><a href="http://comet.ml/?utm_campaign=heartbeat-statement&amp;utm_source=blog&amp;utm_medium=medium"><em>Comet</em></a><em>, an MLOps platform that enables data scientists &amp; ML teams to track, compare, explain, &amp; optimize their experiments. We pay our contributors, and we don’t sell ads.</em></p><p><em>If you’d like to contribute, head on over to our</em><a href="https://heartbeat.fritz.ai/call-for-contributors-october-2018-update-fee7f5b80f3e"><em> call for contributors</em></a><em>. You can also sign up to receive our weekly newsletters (</em><a href="https://www.deeplearningweekly.com/"><em>Deep Learning Weekly</em></a><em> and the </em><a href="https://info.comet.ml/newsletter-signup/"><em>Comet Newsletter</em></a><em>), join us on</em><a href="https://join.slack.com/t/fritz-ai-community/shared_invite/enQtNTY5NDM2MTQwMTgwLWU4ZDEwNTAxYWE2YjIxZDllMTcxMWE4MGFhNDk5Y2QwNTcxYzEyNWZmZWEwMzE4NTFkOWY2NTM0OGQwYjM5Y2U"><em> </em></a><a href="https://join.slack.com/t/cometml/shared_invite/zt-49v4zxxz-qHcTeyrMEzqZc5lQb9hgvw"><em>Slack</em></a><em>, and follow Comet on </em><a href="https://twitter.com/Cometml"><em>Twitter</em></a><em> and </em><a href="https://www.linkedin.com/company/comet-ml/"><em>LinkedIn</em></a><em> for resources, events, and much more that will help you build better ML models, faster.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5a14829a03f1" width="1" height="1" alt=""><hr><p><a href="https://heartbeat.comet.ml/how-to-capture-the-best-frame-in-an-ios-image-processing-app-5a14829a03f1">How to Capture the Best Frame in an iOS Image Processing App</a> was originally published in <a href="https://heartbeat.comet.ml">Heartbeat</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Atatürk Videosunu Nasıl Renklendirdim?]]></title>
            <link>https://medium.com/bili%C5%9Fim-hareketi/atat%C3%BCrk-videosunu-nas%C4%B1l-renklendirdim-12f5aa89ec93?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/12f5aa89ec93</guid>
            <category><![CDATA[renkli-atatürk-video]]></category>
            <category><![CDATA[atatürk]]></category>
            <category><![CDATA[derin-ogrenme]]></category>
            <category><![CDATA[yapay-zeka]]></category>
            <category><![CDATA[renklendirilmiş-atatürk]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Sat, 01 Jun 2019 10:07:00 GMT</pubDate>
            <atom:updated>2020-03-04T21:05:16.149Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YOEl1v2oG_yIzUqGiO4mUQ.png" /></figure><p>Son günlerde sosyal medyada viral olarak yayılan Atatürk videosunu 19 Mayıs 2019 dolayısıyla ben renklendirdim. Videonun viral olarak yayılmasını, benden habersiz bir şekilde videoyu kopyalayarak kendi Youtube kanalına yükleyen ve kendi kanal adını videoya yapıştırıp <a href="https://eksisozluk.com/entry/90502612">ekşisözlük’te</a> paylaşan kişiler sağladı. İlk başta kendisine ne kadar kızsam da sonradan o olmasa videonun bu kadar hızlı yayılmacağını düşündüğümden bu duyguları bir kenara bıraktım. Bu süreçte NTV’den Yağız Şenkal bey sağolsun bana ulaştı ve <a href="https://www.ntv.com.tr/video/teknoloji/ntvye-ozel-renkli-ataturk-videosu,ONR2LM28RUS6f2uF0pZ68A">röportaj</a> yaptık.</p><h3>Özgür Şahin on Twitter</h3><p>Atatürk&#39;ü Anma, Gençlik ve Spor Bayramı&#39;mız kutlu olsun. Derin öğrenme kullanarak siyah beyaz resimleri ve videoları renklendirebiliyoruz. Bugünün şerefine siyah beyaz bir videoyu bu şekilde renklendirdim. https://t.co/jGQrIFVyGp</p><h4>Renklendirme Neden Yapıldı?</h4><p>Kai Fuu Lee’nin yazdığı <a href="https://www.amazon.com/AI-Superpowers-China-Silicon-Valley-ebook/dp/B0795DNWCF">AI Superpowers</a> kitabında yapay zekanın dünyayı nasıl etkileceğinden bahsediyor. Orada paylaştığı beni aydınlatan bir anekdot var.</p><blockquote>2016’da Çin’de yapay zeka, yazarın ‘Sputnik anı’ olarak nitelendirdiği bir an yaşıyor. AlphaGo isimli yapay zeka programı efsanevi Go şampiyonu Lee Sedol’u üç oyunluk bir turnuvada yeniyor. Bu oyunlar Çin’de 280 milyonu TV başına kitliyor ve Lee’nin yenilmesi çoğu insanın kalbini kırıyor. Ama insanlar yenilginin sebep olduğu hayal kırıklığına kapılmak yerine, bunu yapay zekanın gücünü kullanmak için bir ilham haline dönüştürüyorlar. Adeta Amerikalıların aya ilk çıkanlar olmasını tetikleyen Rus uydusu Sputnik’in kalkışı gibi. John F. Kennedy’nin aya iniş yapma niyetini açıkladığı gibi, Çin hükümeti de gelecek on senede yapay zekada global bir lider olma amacını açıklayan bir toparlanma çağrısında bulundu. Çin birkaç sene öncesine kadar inovasyondan çok kopya teknolojiler üreten bir yer gibi göründüğü için bu çağrı çok önemli.</blockquote><p>Türkiye’de de böyle bir inovasyon/teknoloji dönüşümüne dikkat çekilmesi için bizim de böyle bir anıya sahip olmamız gerektiğini düşünüyordum. Yapay zeka algoritmalarını neler yapabileceğini Türkiye’nin çoğunluğuna nasıl gösterebileceğim fikri bayağıdır aklımın bir köşesindeydi. Bu kütüphane ile Atatürk’ün videolarını renklendirme fikri aklıma geldiğinde hemen harekete geçtim.</p><h4>Renklendirme Nasıl Yapıldı?</h4><p>Bu işlemi herkes yapabilir, bu yazıda adım adım nasıl yapıldığını anlatacağım ve ne kadar kolay olduğunu göreceksiniz. Bu videoyu <a href="https://twitter.com/citnaj">Jason Antic</a>’in açık kaynak resim/video iyileştirme ve renklendirme kütüphanesi (framework) <a href="https://github.com/jantic/DeOldify">DeOldify’ı</a> kullanarak yaptım. <strong>Sosyal medyada tüm takdiri ben topladım ama asıl takdir edilmesi gereken </strong><a href="https://twitter.com/citnaj"><strong>Jason Antic</strong></a><strong>’dir. </strong>Kütüphanesini uzun zamandır iOS uygulamaya dönüştürmek için takip ediyordum. Daha önceden sadece resimleri renklendirebilirken yakın zamanda videoları da renklendirebilir hale geldi.</p><p>DeOldify bir derin öğrenme (deep learning) kütüphanesi. 
In other words, it is a machine learning model that learns by being shown data examples and can then, using what it has learned, do the work without human intervention. Here, it was trained on color images and their black-and-white versions so that it learns the relationship between the two. Having been trained on millions of images, it has learned that clouds should be painted white and trees green, and it colorizes an image based on the examples it has seen before. In the Atatürk video it sometimes cannot colorize the Turkish flag correctly, because the image dataset it was trained on does not contain enough examples of the Turkish flag, so it has not learned it. Those who want to improve it further can add images specific to us to this dataset and make its learning better.</p><p>If topics like AI and deep learning are new to you, I strongly recommend reading <a href="https://medium.com/deep-learning-turkiye/makine-%C3%B6%C4%9Frenmesi-e%C4%9Flencelidir-b9d50aad3a62">Makine Öğrenmesi Eğlencelidir</a> (“Machine Learning Is Fun”), which I translated into Turkish. If you prefer not to read, you can watch <a href="https://medium.com/u/ed38bb866abe">Ayyüce Kızrak</a>’s TEDx talk <a href="https://www.youtube.com/watch?v=kieF8V-vJfo">Yapay Zeka Ve İnsan El Ele</a>. Once you have read that article, you will have an idea about the subject and understand the big picture a bit better.</p><h4>How Do We Use the Library?</h4><p>Actually, all I did was enter a URL and press “next, next” :)</p><p>Open the Colab environment prepared for using DeOldify from <a href="https://colab.research.google.com/github/jantic/DeOldify/blob/master/VideoColorizerColab.ipynb">here</a> and sign in with your Google account. Colab is a free platform where we can run code online and train AI models. On this platform, offered by Google, we can use a powerful GPU (a Tesla K80) for free and run Python code.</p><p>The notes above each cell explain what that code block does. We run those lines by pressing the play button on the left. Before running anything, make sure Python 3 and GPU are selected under Runtime &gt; Change runtime type in the top menu. Start by running the first code block below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GOYwV5nfQSGQjpJLrhrPgg.png" /></figure><p>A warning like the following will appear. Google Colab can access your Google data, which is why this confirmation shows up. You can click Run Anyway.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GNV0xAls12y8ik2_l_-BxQ.png" /></figure><p>You will also see the following prompt asking to reset the Colab runtime; confirm that as well.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3pIQw6W9MliX2QZGgwHllA.png" /></figure><p>While a cell is running, a dashed circle spins around the play button. When it finishes, the button looks like the one below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/560/1*S4M03YiqEPrwm_rftUc3NA.png" /></figure><p>Let’s run the lines below one by one as well. When you hover the mouse over the two square brackets on the left, the play button appears; click it for each code block and keep going.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*l3HFyviafbvYaYnIt92cCw.png" /></figure><p>Run everything above, one by one, until you reach the code block below. Paste the address of the video you want to colorize into the source_url field. If you increase render_factor here, the conversion is more detailed and takes longer. 
21 is the optimum value found for the speed/performance trade-off, so leave it as is for now. You can increase it and try again later if you like. After entering the URL, run this block as well.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ljiIRICTRtnptOUCvljCGA.png" /></figure><p>You can follow the progress of the video processing in the section below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CgS_JFbp_rtOqgvcIGLwSg.png" /></figure><p>Wait until this reaches 100% and the spinning line around the play button stops.</p><p>When the process is finished, click the Files tab in the left menu. Follow the folder hierarchy below; our files are there. If they have not appeared yet, press the refresh button and open the folder again. It produces both a version of the converted video with sound and one without. You can download them from here by right-clicking and choosing download.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/702/1*SFo7QsSeYDHLY-DcTMsTng.png" /></figure><p>Congratulations, you are now a colorizer too :)</p><p>You can find the other videos I have colorized in this <a href="https://www.youtube.com/playlist?list=PLJ1MO3koDwN2RybPeRNT-CzvLnA52brlx">YouTube playlist</a>.</p><p>If you would like to colorize still images, head over <a href="https://colab.research.google.com/github/jantic/DeOldify/blob/master/ImageColorizerColab.ipynb">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=12f5aa89ec93" width="1" height="1" alt=""><hr><p><a href="https://medium.com/bili%C5%9Fim-hareketi/atat%C3%BCrk-videosunu-nas%C4%B1l-renklendirdim-12f5aa89ec93">Atatürk Videosunu Nasıl Renklendirdim?</a> was originally published in <a href="https://medium.com/bili%C5%9Fim-hareketi">Bilişim Hareketi</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Görüntü İşleyen Bir Uygulamada En İyi Frame’i Nasıl Yakalarız?]]></title>
            <link>https://medium.com/nsistanbul/g%C3%B6r%C3%BCnt%C3%BC-i%CC%87%C5%9Fleyen-bir-uygulamada-en-i%CC%87yi-framei-nas%C4%B1l-yakalar%C4%B1z-65cef43edf0c?source=rss-32c3e42a9823------2</link>
            <guid isPermaLink="false">https://medium.com/p/65cef43edf0c</guid>
            <category><![CDATA[mobil-uygulamalar]]></category>
            <category><![CDATA[yazılım]]></category>
            <category><![CDATA[coreml]]></category>
            <category><![CDATA[görüntü-i̇şleme]]></category>
            <category><![CDATA[ios-app-development]]></category>
            <dc:creator><![CDATA[Özgür Şahin]]></dc:creator>
            <pubDate>Sun, 26 May 2019 09:56:38 GMT</pubDate>
            <atom:updated>2019-05-26T09:56:38.624Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HktayRntK5xj21i6tpTNvw.jpeg" /><figcaption><a href="https://www.youtube.com/watch?v=DbVHe5A6rJs">Source</a></figcaption></figure><h3>iOS Geliştiriciler için Makine Öğrenmesi #5: Görüntü İşleyen Bir Uygulamada En İyi Frame’i Nasıl Yakalarız?</h3><p>iOS geliştiricilere makine öğrenmesi araçlarını tanıttığım yazı dizisinin önceki yazılarını okumadıysanız göz atmanızda fayda var.</p><blockquote><a href="https://medium.com/deep-learning-turkiye/ios-geli%C5%9Ftiriciler-i%C3%A7in-makine-%C3%B6%C4%9Frenmesi-2-b%C3%B6l%C3%BCm-createml-ile-yaz%C4%B1-s%C4%B1n%C4%B1fland%C4%B1rma-1e788e14db0e"><strong>CreateML ile Yazı Sınıflandırma</strong></a></blockquote><blockquote><a href="https://medium.com/nsistanbul/ios-geli%C5%9Ftiriciler-i%C3%A7in-makine-%C3%B6%C4%9Frenmesi-1-b%C3%B6l%C3%BCm-createml-ile-g%C3%B6r%C3%BCnt%C3%BC-s%C4%B1n%C4%B1fland%C4%B1rma-b93af68e7f46"><strong>CreateML ile Görüntü Sınıflandırma</strong></a></blockquote><blockquote><a href="https://medium.com/nsistanbul/mldatatable-nedir-nas%C4%B1l-kullan%C4%B1l%C4%B1r-b7a1e3e5b948"><strong>MLDataTable Nedir Nasıl Kullanılır?</strong></a></blockquote><blockquote><a href="https://medium.com/nsistanbul/ios-geli%C5%9Ftiriciler-i%C3%A7in-makine-%C3%B6%C4%9Frenmesi-4-do%C4%9Fal-dil-i%CC%87%C5%9Fleme-b1283a29576b"><strong>Doğal Dil İşleme</strong></a></blockquote><p>Bu yazıda başarılı bir görüntü işleme için düzgün görüntüyü nasıl yakalayabileceğimizi inceleyeceğiz.</p><p>iOS platformunda görüntü işleyen ve video buffer’ı tüketen herhangi bir uygulama geliştirdiyseniz, burada kaynaklarınızı (CPU, GPU) kullanırken temkinli olmanız gerektiğini bilirsiniz. Mesela görüntü işleyen bir uygulamada kamera akışını takip edip görüntüdeki nesneleri tahminlemeye çalışıyorsanız burada kullanıcı kamerasını sağa sola çevirirken her frame’i işlerseniz yanlış sınıflandırmalar yapabilir ve gereksiz kaynak tüketebilirsiniz. Yüksek doğrulukla görüntü tahminlemek için görüntü sahnesinin stabil olduğu anı yakalamanız gerekir. Bu sorunu çözmek için<strong> Apple’ın önerdiği bir yöntem va</strong>r bu yazıda bu yöntemi öğreneceğiz.</p><p>Frame’i yakalamak için basitçe iki seçeneğiniz var:</p><ol><li>Kullanıcı resim çekme butonuna basar.</li><li>Sürekli video bufferından gelen framelerden en iyisini otomatik olarak yakalarsınız.</li></ol><p>İlk seçenekte sahne sabitliğini kullanıcıya bırakırsınız. <a href="https://youtu.be/vIci3C4JkL0?t=67">Hot dog</a> uygulamasını hatırlayın o uygulamada kullanıcı fotograf çekip görüntü işlemek için bir butona basıyordu. Burada en iyi frame’i yakalama görevini kullanıcıya bırakıyoruz.</p><p>İkinci seçenekte ise kullanıcı kamerasını hedefe doğrulttuğunda en iyi görüntüyü yakalamak uygulamanın görevi.</p><p>Bu aşamada kullanıcının telefonu sabit tutup tutmadığını anlamamız ve sabit tuttuğunda görüntüyü yakalamamız gerekiyor. Neyse ki görüntü işleme tanrıları bu işi çözmüşler ve bize Vision frameworküyle registration algoritmasını sağlamışlar :)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*5VFvRyIVcsPyJ0HK.jpg" /></figure><h3>Görüntü Çakıştırma (Image Registration)</h3><p>Tam Türkçe karşılığını görüntü çakıştırma mı emin değilim. Makalelerde bu şekilde kullanılmış. Görüntü çakıştırma birbirine benzer görüntüleri hizalama problemi olarak tanımlanabilir. 
These images may have been acquired at different times, from different angles, or from different sensors.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/382/0*OOnaN3j6I4AB16x3.jpg" /><figcaption><a href="https://www.mathworks.com/discovery/image-registration.html">Source</a></figcaption></figure><p>In the registration process, we overlay two images on top of each other, and the algorithm tells us, “If you shift it by this many pixels, the images will line up exactly.” As Frank Doepke said in his <a href="https://developer.apple.com/videos/play/wwdc2018/717/">Vision with CoreML session</a>, it is a cheap and fast algorithm, and it tells us how still we are holding the camera and whether anything is moving in front of it.</p><p>In image-processing apps, if we process every frame coming from the camera buffer, we can cause delays and performance loss in the UI layer. At this stage, Apple suggests using the registration algorithm to detect that the scene and the camera are stable, and only then running our classification algorithm on that frame.</p><h4>How Do We Measure the Difference Between Images?</h4><p><a href="https://developer.apple.com/documentation/vision/vntranslationalimageregistrationrequest"><strong>VNTranslationalImageRegistrationRequest</strong></a> in Apple’s <strong>Vision</strong> framework provides the solution here. With this request, we can determine whether a frame coming from the video buffer is worth spending Vision resources on. On iOS, apps that access the camera reach the frames through the <em>captureOutput:didOutputSampleBuffer:fromConnection</em> delegate method. Inside this method, we will check how much the frame has changed by calling the registration request as shown below. With this request, we perform image analysis and find the transform (affine transform) needed to align the two images.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e0a6bacad20ce2b4c2f558584cd8cc27/href">https://medium.com/media/e0a6bacad20ce2b4c2f558584cd8cc27/href</a></iframe><p>Here, using the sequenceRequestHandler (<a href="https://developer.apple.com/documentation/vision/vnsequencerequesthandler">VNSequenceRequestHandler</a>) and <a href="https://developer.apple.com/documentation/vision/vntranslationalimageregistrationrequest">VNTranslationalImageRegistrationRequest</a> objects, we record the last 15 frames and compare consecutive frames. With a <a href="https://developer.apple.com/documentation/vision/vnsequencerequesthandler">VNSequenceRequestHandler</a>, we can run image analysis on each frame in a sequence (15 frames in this case). In this algorithm, we accept the scene as stable if the Manhattan distance between two frames is less than 20.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3f81a131bb0825131330e9680b2c1bc4/href">https://medium.com/media/3f81a131bb0825131330e9680b2c1bc4/href</a></iframe><p>That is all well and good, but what is the Manhattan distance? 
For example, suppose we have two points in the coordinate system, 𝑥=(𝑎,𝑏) and 𝑦=(𝑐,𝑑).</p><p>Under the Euclidean distance, the distance between the two points is the length of the straight line connecting them.</p><p>Under the Manhattan distance, the distance is the total length of the perpendicular, axis-aligned segments passing through these points, that is, |𝑎−𝑐|+|𝑏−𝑑|.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/830/1*s3y9kPr_8FuzMsaoNS1fvQ.png" /><figcaption><a href="http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Clustering_Parameters/Manhattan_Distance_Metric.htm">http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Clustering_Parameters/Manhattan_Distance_Metric.htm</a></figcaption></figure><p>After computing the Manhattan distance, we will check the results of the image registration request. Here, the result of the request is read as alignmentObservation.alignmentTransform and used to check the stability of the scene. With the recordTransposition function, the result for each of the last 15 frames is added to the transpositionHistoryPoints stack.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a3bbba1bbd868b128b02c848e92d6c96/href">https://medium.com/media/a3bbba1bbd868b128b02c848e92d6c96/href</a></iframe><p>The <em>sceneStabilityAchieved</em> function checks these results for the last 15 frames to detect scene stability. If a sufficiently stable scene has been captured, we can now pass this frame to <strong>Core ML</strong> for analysis. Because we pass a frame with a stable scene rather than a random one, our classification algorithm will work more successfully.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7a034f4db87c94b620743320296cd0dc/href">https://medium.com/media/7a034f4db87c94b620743320296cd0dc/href</a></iframe><h4>Conclusion</h4><p>In this post, we learned how to capture a stable scene by using image registration and measuring the Manhattan distance between frames. 
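</p><p>As a quick recap, here is a compact, illustrative sketch of that rule (the free-function signature is hypothetical, not the exact code in the linked gist): stability boils down to a Manhattan-distance check over the recent shift history.</p><pre><code>import CoreGraphics

// The scene counts as stable when the summed frame-to-frame shift over the
// last 15 recorded translations stays within a small Manhattan distance.
func sceneStabilityAchieved(history: [CGPoint]) -&gt; Bool {
    guard history.count &gt;= 15 else { return false }
    let total = history.suffix(15).reduce(CGPoint.zero) {
        CGPoint(x: $0.x + $1.x, y: $0.y + $1.y)
    }
    return abs(total.x) + abs(total.y) &lt; 20
}
</code></pre><p>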
With this method, our image-processing Core ML apps will run with better performance and consume fewer resources.</p><p>You can find the full code of the ViewController <a href="https://gist.github.com/ozgurshn/e6eeb69238be0ba21582b61f1dfe1b87">here</a>.</p><p>We have taken one more step on the road to building smart apps; see you in the next post.</p><p>I would be glad if you shared your expectations for the machine learning course for iOS developers that I am preparing on Udemy via this <a href="https://docs.google.com/forms/d/e/1FAIpQLScOaLo2nIA_DrU0iFlyuBqtvUr8iwqH1N25q3Vgj2bjiS1afg/viewform">form</a>.</p><p>Thanks for reading this far :) To hear about similar posts, you can follow me on <a href="https://t.co/43zLdXHvHx?amp=1">Medium</a> and <a href="https://mobile.twitter.com/ozgr_shn">Twitter</a>, and you can subscribe to my mail newsletter.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2Fe%2F1FAIpQLSfXoFPaApmoRAGdQuKZFZh85ayInsK_0r-IcWQR2oa2Lzpi7Q%2Fviewform%3Fembedded%3Dtrue&amp;url=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2Fe%2F1FAIpQLSfXoFPaApmoRAGdQuKZFZh85ayInsK_0r-IcWQR2oa2Lzpi7Q%2Fviewform%3Fusp%3Dsend_form&amp;image=https%3A%2F%2Flh6.googleusercontent.com%2F3dnEeiWEucwR-41DW9bXGM6aVNXdeTZGP7sD4FpWU7GuWi39UgQ5WHC-H7M4LJMIeok%3Dw1200-h630-p&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=google" width="760" height="500" frameborder="0" scrolling="no"><a href="https://medium.com/media/3c496bcb56519dc8793c916c87ec44c4/href">https://medium.com/media/3c496bcb56519dc8793c916c87ec44c4/href</a></iframe><p>Take care, and may your years be full of ML :)</p><h4>Resources</h4><ul><li><a href="https://developer.apple.com/documentation/vision/training_a_create_ml_model_to_classify_flowers">Training a Create ML Model to Classify Flowers</a></li><li><a href="https://developer.apple.com/videos/play/wwdc2018/717/">Vision with Core ML - WWDC 2018 - Videos - Apple Developer</a></li></ul><p><a href="https://pdfs.semanticscholar.org/c723/ebf73d5a4deb3e9a901450307dea92a8839d.pdf">https://pdfs.semanticscholar.org/c723/ebf73d5a4deb3e9a901450307dea92a8839d.pdf</a></p><iframe src="https://drive.google.com/viewerng/viewer?url=http%3A//people.sabanciuniv.edu/mcetin/publications/karabulut_SIU17.pdf&amp;embedded=true" width="600" height="780" frameborder="0" scrolling="no"><a href="https://medium.com/media/96e32e7c486389a5aa020ac6838e627a/href">https://medium.com/media/96e32e7c486389a5aa020ac6838e627a/href</a></iframe><iframe src="https://drive.google.com/viewerng/viewer?url=https%3A//dergipark.org.tr/download/article-file/286916&amp;embedded=true" width="600" height="780" frameborder="0" scrolling="no"><a href="https://medium.com/media/8d12d1c1e8c27e85427591ea52a9dbda/href">https://medium.com/media/8d12d1c1e8c27e85427591ea52a9dbda/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=65cef43edf0c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/nsistanbul/g%C3%B6r%C3%BCnt%C3%BC-i%CC%87%C5%9Fleyen-bir-uygulamada-en-i%CC%87yi-framei-nas%C4%B1l-yakalar%C4%B1z-65cef43edf0c">Görüntü İşleyen Bir Uygulamada En İyi Frame’i Nasıl Yakalarız?</a> was originally published in <a href="https://medium.com/nsistanbul">NSIstanbul</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>