{"id":31280,"date":"2021-02-06T11:49:47","date_gmt":"2021-02-06T11:49:47","guid":{"rendered":"https:\/\/www.xatakaciencia.com\/computacion\/esta-ia-puede-interpretar-musica-que-interpreta-instrumento-solo-usando-senales-visuales"},"modified":"2021-02-06T11:49:47","modified_gmt":"2021-02-06T11:49:47","slug":"esta-ia-puede-interpretar-la-musica-que-interpreta-un-instrumento-solo-usando-senales-visuales","status":"publish","type":"post","link":"http:\/\/forocilac.org\/en\/esta-ia-puede-interpretar-la-musica-que-interpreta-un-instrumento-solo-usando-senales-visuales\/","title":{"rendered":"This AI can interpret the music played by an instrument only using visual cues"},"content":{"rendered":"<p>\n      <img decoding=\"async\" src=\"https:\/\/i.blogs.es\/964aa2\/audeothumb_c\/1024_2000.png\" alt=\"This AI can interpret the music played by an instrument only using visual cues\">\n    <\/p>\n<p>Machine learning has helped a group of researchers at the University of Washington devise a system, <a href=\"http:\/\/faculty.washington.edu\/shlizee\/audeo\/\">called Audeo<\/a>, which creates audio from silent piano performances. <\/p>\n<p><!-- BREAK 1 --><\/p>\n<p>That is, this artificial intelligence <strong>recreates the performing experience of musicians and their instruments using only visual cues<\/strong>.<\/p>\n<p><!-- BREAK 2 --><!--more--><\/p>\n<h2>Audeo<\/h2>\n<p>Audeo uses a series of steps to decode what&#039;s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram of key presses over time. Then it has to translate that diagram into something that a music synthesizer will actually recognize as a sound a piano would make. 
<strong>This second step cleans up the data and adds more information, such as how hard each key is pressed and for how long<\/strong>.<\/p>\n<p><!-- BREAK 3 --><\/p>\n<div class=\"article-asset-video\">\n<div class=\"asset-content\">\n<div class=\"base-asset-video\">\n   <iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/8rS3VgjG7_c\" allowfullscreen><\/iframe>\n  <\/div><\/div>\n<\/div>\n<p>The researchers trained and tested the system using YouTube videos of the pianist <strong>Paul Barton<\/strong>. The training set consisted of about 172,000 video frames of Barton playing music by well-known classical composers, such as Bach and Mozart.<\/p>\n<p><!-- BREAK 4 --><\/p>\n<p>Audeo&#039;s output is faithful enough for song recognition apps to identify: the apps correctly identified the piece Audeo was playing approximately 86% of the time, <strong>compared with 93% for the original audio tracks<\/strong>. <\/p>\n<p><!-- BREAK 5 --><\/p>\n<p>Audeo was trained and tested only on Paul Barton piano videos. 
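The second decoding step described above, turning a frame-by-frame key-press diagram into timed note events a synthesizer could play, can be sketched roughly as follows. This is an illustrative reconstruction, not Audeo's actual code: the frame rate, the fixed velocity value, and the event format are all assumptions.

```python
# Illustrative sketch (not Audeo's code): collapse a per-frame key-press
# matrix (frames x 88 keys) into note events with onset, duration, and a
# placeholder velocity. Frame rate and velocity are assumed values.

FPS = 25            # assumed video frame rate
VELOCITY = 64       # placeholder "how hard" value; Audeo estimates this

def roll_to_events(roll, fps=FPS, velocity=VELOCITY):
    """Return (key, onset_seconds, duration_seconds, velocity) tuples."""
    events = []
    n_frames = len(roll)
    n_keys = len(roll[0]) if n_frames else 0
    for key in range(n_keys):
        start = None
        for f in range(n_frames):
            if roll[f][key] and start is None:
                start = f                                  # note onset
            elif not roll[f][key] and start is not None:   # note released
                events.append((key, start / fps, (f - start) / fps, velocity))
                start = None
        if start is not None:                              # held to the last frame
            events.append((key, start / fps, (n_frames - start) / fps, velocity))
    events.sort(key=lambda e: e[1])                        # order by onset time
    return events

# Toy diagram: key 39 held for frames 0-2, key 43 pressed only in frame 2.
roll = [[k == 39 for k in range(88)],
        [k == 39 for k in range(88)],
        [k in (39, 43) for k in range(88)]]
print(roll_to_events(roll))
# → [(39, 0.0, 0.12, 64), (43, 0.08, 0.04, 64)]
```

A synthesizer (or a MIDI writer) can consume such events directly; the cleanup Audeo performs, such as estimating how hard each key was actually struck, replaces the fixed placeholder velocity used here.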
Future research is needed to see how well it can transcribe music for any musician or any piano.<\/p>\n<p><!-- BREAK 6 --><\/p>\n<p> &#8211; <br \/> The news<br \/>\n      <a href=\"https:\/\/www.xatakaciencia.com\/computacion\/esta-ia-puede-interpretar-musica-que-interpreta-instrumento-solo-usando-senales-visuales?utm_source=feedburner&#038;utm_medium=feed&#038;utm_campaign=06_Feb_2021\"><br \/>\n       <em> This AI can interpret the music played by an instrument only using visual cues <\/em><br \/>\n      <\/a><br \/>\n      was originally published in<br \/>\n      <a href=\"https:\/\/www.xatakaciencia.com\/?utm_source=feedburner&#038;utm_medium=feed&#038;utm_campaign=06_Feb_2021\"><br \/>\n       <strong> Xataka Science <\/strong><br \/>\n      <\/a><br \/>\n            by <a\n       href=\"https:\/\/www.xatakaciencia.com\/autor\/sergio-parra?utm_source=feedburner&#038;utm_medium=feed&#038;utm_campaign=06_Feb_2021\"><br \/>\n       Sergio Parra<br \/>\n      <\/a><br \/>\n      . 
<\/p>","protected":false},"excerpt":{"rendered":"<p>\n      <img decoding=\"async\" src=\"https:\/\/i.blogs.es\/964aa2\/audeothumb_c\/1024_2000.png\" alt=\"This AI can interpret the music played by an instrument only using visual cues\"><\/p>\n<p>Machine learning has helped a group of researchers at the University of Washington devise a system, <a href=\"http:\/\/faculty.washington.edu\/shlizee\/audeo\/\">called Audeo<\/a>, which creates audio from silent piano performances. <\/p>\n<p><!-- BREAK 1 --><\/p>\n<p>That is, this artificial intelligence <strong>recreates the performing experience of musicians and their instruments using only visual cues<\/strong>.<\/p>\n<p><!-- BREAK 2 --><!--more--><\/p>\n<h2>Audeo<\/h2>\n<p>Audeo uses a series of steps to decode what&#039;s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram of key presses over time. Then it has to translate that diagram into something that a music synthesizer will actually recognize as a sound a piano would make. <strong>This second step cleans up the data and adds more information, such as how hard each key is pressed and for how long<\/strong>.<\/p>\n<p><!-- BREAK 3 --><\/p>\n<div class=\"article-asset-video\">\n<div class=\"asset-content\">\n<div class=\"base-asset-video\"><\/div>\n<\/div>\n<\/div>\n<p>The researchers trained and tested the system using YouTube videos of the pianist <strong>Paul Barton<\/strong>. 
The training set consisted of about 172,000 video frames of Barton playing music by well-known classical composers, such as Bach and Mozart.<\/p>\n<p><!-- BREAK 4 --><\/p>\n<p>Audeo&#039;s output is faithful enough for song recognition apps to identify: the apps correctly identified the piece Audeo was playing approximately 86% of the time, <strong>compared with 93% for the original audio tracks<\/strong>. <\/p>\n<p><!-- BREAK 5 --><\/p>\n<p>Audeo was trained and tested only on Paul Barton piano videos. Future research is needed to see how well it can transcribe music for any musician or any piano.<\/p>\n<p><!-- BREAK 6 --><\/p>\n<p> &#8211; <br \/> The news<br \/>\n      <a href=\"https:\/\/www.xatakaciencia.com\/computacion\/esta-ia-puede-interpretar-musica-que-interpreta-instrumento-solo-usando-senales-visuales?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=06_Feb_2021\"><br \/>\n       <em> This AI can interpret the music played by an instrument only using visual cues <\/em><br \/>\n      <\/a><br \/>\n      was originally published in<br \/>\n      <a href=\"https:\/\/www.xatakaciencia.com\/?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=06_Feb_2021\"><br \/>\n       <strong> Xataka Science <\/strong><br \/>\n      <\/a><br \/>\n            by <a href=\"https:\/\/www.xatakaciencia.com\/autor\/sergio-parra?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=06_Feb_2021\"><br \/>\n       Sergio Parra<br \/>\n      <\/a><br \/>\n      . 
<\/p>","protected":false},"author":19,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[125],"tags":[],"class_list":{"0":"post-31280","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-portal-3"},"aioseo_notices":[],"_links":{"self":[{"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/posts\/31280","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/comments?post=31280"}],"version-history":[{"count":5,"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/posts\/31280\/revisions"}],"predecessor-version":[{"id":31571,"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/posts\/31280\/revisions\/31571"}],"wp:attachment":[{"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/media?parent=31280"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/categories?post=31280"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/forocilac.org\/en\/wp-json\/wp\/v2\/tags?post=31280"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}