{"id":3462,"date":"2019-10-16T19:51:26","date_gmt":"2019-10-16T18:51:26","guid":{"rendered":"http:\/\/dronesonen.usn.no\/?p=3462"},"modified":"2019-12-05T08:22:15","modified_gmt":"2019-12-05T07:22:15","slug":"haptix-week-41","status":"publish","type":"post","link":"https:\/\/dronesonen.usn.no\/?p=3462","title":{"rendered":"Haptix Week 41"},"content":{"rendered":"\n<p>Had a meeting&nbsp;in&nbsp;the Cave at the University. The electro students had received their components, so we&nbsp;looked at these together. In addition, we&nbsp;went through the Leap Motion device and&nbsp;the&nbsp;HTC&nbsp;Vive.&nbsp;<\/p>\n\n\n\n<p><strong>Petter<\/strong><strong>:<\/strong>&nbsp;<\/p>\n\n\n\n<p>I\u2019ve finished the&nbsp;rubix&nbsp;system which basically describes how each piece interacts with the pieces around&nbsp;it&nbsp;.&nbsp;I still need to adjust how the pieces interact with each other, as there\u2019s some fun glitches going on, but it\u2019s looking pretty good.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Hoping to start work on some custom&nbsp;rubix&nbsp;color designs next week.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"530\" height=\"477\" src=\"http:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-45.png\" alt=\"\" class=\"wp-image-3463\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-45.png 530w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-45-300x270.png 300w\" sizes=\"auto, (max-width: 530px) 100vw, 530px\" \/><\/figure>\n\n\n\n<p><strong>Daniel:<\/strong>&nbsp;<\/p>\n\n\n\n<p>After going through the Leap Motion device in depth, I figured that I could start&nbsp;creating the interface between Unity and the Haptic gloves.&nbsp;Rather&nbsp;than&nbsp;developing&nbsp;another&nbsp;interaction&nbsp;for&nbsp;the&nbsp;Unity&nbsp;application.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"359\" src=\"http:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-44-1024x359.png\" alt=\"\" class=\"wp-image-3464\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-44.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-44-300x105.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2019\/10\/image-44-768x269.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>This is something I discussed with the electro students, specifically Herman. Who said that his intentions with the Haptic gloves were to make them&nbsp;wireless.&nbsp;Basically,&nbsp;that&nbsp;he would use an Arduino master to feed output to 2 Arduino slaves (1 per glove).&nbsp;<\/p>\n\n\n\n<p>I can then try to create a Unity script that reacts to the input from the Leap Motion.&nbsp;Later,&nbsp;have that Unity script feed information to the Arduino master, which will then feed to the&nbsp;arduino&nbsp;slaves.&nbsp;<\/p>\n\n\n\n<p>So\u00a0for the next few weeks, this is what I will be doing.\u00a0<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>Even:<\/strong>&nbsp;<\/p>\n\n\n\n<p>Using&nbsp;Reinforcement&nbsp;learning AI to make a&nbsp;competitive environment&nbsp;<\/p>\n\n\n\n<p>To be able to compete&nbsp;against something or someone&nbsp;solving a&nbsp;Rubix&nbsp;cube, we decided to&nbsp;investigate&nbsp;how to use artificial intelligence to make&nbsp;an&nbsp;opponent.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The AI will use reinforcement learning. 
So for the next few weeks, this is what I will be doing.

**Even:**

*Using reinforcement learning AI to make a competitive environment*

To be able to compete against something or someone solving a Rubik's cube, we decided to investigate how to use artificial intelligence to make an opponent.

The AI will use reinforcement learning. Reinforcement learning (RL) is a technique used to train an AI to solve a specific problem. The concept is that you reward the AI when it does something positive and penalize it when it does something bad. The AI's goal is to collect the highest possible reward.

In Unity there is a toolkit called ML-Agents, which allows you to train the "agents" that will interact with the environment. The general workflow is:

*Figure 1: Democratize machine learning: ML-Agents explained – Unite LA*

Because our Rubik's cube is not finished, we need to train our model on a different (copied) cube.

[Image]

To ensure that the training goes as fast as possible, we need to define some parameters and milestones. This is what my research time is being used for now. When the parameters and milestones are set, we can start training the agent. This is an iterative process, described in the diagram underneath.

*Figure 2: Democratize machine learning: ML-Agents explained – Unite LA*

**Actions**

An action is a move the cube can make at any given time. There are 12 actions in total: 6 sides, with a clockwise and a counterclockwise turn for each side.

**States**

The 3x3x3 cube has 43,252,003,274,489,856,000 possible combinations, or roughly 43 quintillion. This means that brute forcing the cube (running through all the combinations) is not a workable approach.
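That number is not arbitrary; it falls out of counting the cube's corner and edge arrangements. A quick sanity check of the arithmetic (this is standard cube counting, not project code):

```csharp
using System;
using System.Numerics;  // BigInteger: the result overflows ulong

class CubeStates
{
    static BigInteger Factorial(int n)
    {
        BigInteger r = 1;
        for (int i = 2; i <= n; i++) r *= i;
        return r;
    }

    static void Main()
    {
        // 8 corners: 8! placements, 3^7 orientations (the last is forced).
        // 12 edges: 12! placements, 2^11 orientations (the last is forced).
        // Divide by 2: corner and edge permutations must share parity,
        // so half of the naive arrangements are unreachable.
        BigInteger states =
            Factorial(8) * BigInteger.Pow(3, 7) *
            Factorial(12) * BigInteger.Pow(2, 11) / 2;

        Console.WriteLine(states);  // 43252003274489856000
    }
}
```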
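To make the action and reward ideas above concrete, here is a rough sketch of what a cube agent might look like in ML-Agents. It loosely follows the 0.x-era ML-Agents API (method names have changed in later releases), and `CubeModel`, `Scramble`, `StickerColours`, `RotateFace`, and `IsSolved` are hypothetical stand-ins for our own cube scripts, so treat it as an illustration rather than working code:

```csharp
using MLAgents;     // Unity ML-Agents, 0.x-era API (autumn 2019)
using UnityEngine;

// Rough sketch of a cube-solving agent. CubeModel and its methods are
// hypothetical stand-ins for our own cube code.
public class CubeAgent : Agent
{
    public CubeModel cube;          // hypothetical cube component

    public override void AgentReset()
    {
        cube.Scramble();            // start each episode from a random state
    }

    public override void CollectObservations()
    {
        // One observation per sticker colour (54 values on a 3x3x3).
        foreach (int sticker in cube.StickerColours())
            AddVectorObs(sticker);
    }

    public override void AgentAction(float[] vectorAction, string textAction)
    {
        // Discrete action 0..11: face index * 2 + turn direction.
        int action = (int)vectorAction[0];
        cube.RotateFace(face: action / 2, clockwise: action % 2 == 0);

        AddReward(-0.01f);          // small step penalty: favour short solutions
        if (cube.IsSolved())
        {
            AddReward(1.0f);        // big reward for solving the cube
            Done();                 // end the episode
        }
    }
}
```

The small step penalty is a common RL trick: it makes shorter solutions score higher, which should matter for an opponent that is meant to race a human.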
**Going forward**

The plan for the coming weeks is to set up the plugin and configure it in Unity. We need to start learning how to use the plugin and begin training less sophisticated models for learning purposes. After we have a good understanding of how it works, we can either train it on our own cube, if it's finished, or train it on another cube to keep the process going.

**Herman and Tom Erik**

[Image]

The communication between Unity and the gloves will be separated into three different units. First, we have the hub, which will open the communication between Unity and the gloves and process the data so that the hub and the gloves speak the same "language". The other two units are the processing units on the gloves, which will translate signals from the hub into actions and turn measurement data into a transmittable signal to send back to the hub. These units will also do most of the calculations around finger position, based on the flex sensors and the IMU on each glove.
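One way to picture the shared "language" is a small fixed-size packet with a header and a checksum. The layout below (start byte, glove id, five finger values, XOR checksum) is purely an illustration of the idea, not a format the group has agreed on:

```csharp
using System;

// Hypothetical packet layout for hub <-> glove traffic; this framing is
// an illustration, not the agreed protocol.
// [0] start byte 0xAA  [1] glove id  [2..6] five finger values  [7] checksum
static class GlovePacket
{
    public const byte StartByte = 0xAA;

    public static byte[] Encode(byte gloveId, byte[] fingers)
    {
        if (fingers.Length != 5)
            throw new ArgumentException("expected one value per finger");

        var packet = new byte[8];
        packet[0] = StartByte;
        packet[1] = gloveId;
        Array.Copy(fingers, 0, packet, 2, 5);

        byte checksum = 0;                     // XOR of all preceding bytes
        for (int i = 0; i < 7; i++) checksum ^= packet[i];
        packet[7] = checksum;
        return packet;
    }

    public static bool Validate(byte[] packet)
    {
        if (packet.Length != 8 || packet[0] != StartByte) return false;
        byte checksum = 0;
        for (int i = 0; i < 7; i++) checksum ^= packet[i];
        return checksum == packet[7];
    }
}
```

A fixed size and a checksum keep decoding on the Arduino side trivial and make corrupted frames easy to drop, which matters once the link goes wireless.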
To replace the Leap Motion camera, we were originally looking into replicating the Vive's tracking system on the glove, but quickly realized that this would pose too big a challenge and consume too much time to be possible in this iteration of the glove. Instead, we will attach the Vive controller's tracking module to the back of the palm and use the Vive's tracking system that way.

[Image]

**William:**

The main-menu user interface is being worked on. My plan is to finish the menu part of the interface and make it more suitable for our purpose. I am also going to work with Even on the reinforcement learning part. Looking ahead, we have a big task when we implement and configure reinforcement learning in Unity. We must make a well-thought-out overview of the learning part (how we want it to communicate, and so on).