
Author: torontoai

[R] An integrated brain-machine interface platform with thousands of channels

biorxiv link


Brain-machine interfaces (BMIs) hold promise for the restoration of sensory and motor function and the treatment of neurological disorders, but clinical BMIs have not yet been widely adopted, in part because modest channel counts have limited their potential. In this white paper, we describe Neuralink’s first steps toward a scalable high-bandwidth BMI system. We have built arrays of small and flexible electrode “threads”, with as many as 3,072 electrodes per array distributed across 96 threads. We have also built a neurosurgical robot capable of inserting six threads (192 electrodes) per minute. Each thread can be individually inserted into the brain with micron precision for avoidance of surface vasculature and targeting specific brain regions. The electrode array is packaged into a small implantable device that contains custom chips for low-power on-board amplification and digitization: the package for 3,072 channels occupies less than 23 × 18.5 × 2 mm³. A single USB-C cable provides full-bandwidth data streaming from the device, recording from all channels simultaneously. This system has achieved a spiking yield of up to 85.5% in chronically implanted electrodes. Neuralink’s approach to BMI has unprecedented packaging density and scalability in a clinically relevant package.

submitted by /u/sensetime

[D] What about declarative knowledge in natural language understanding?

Consider toy riddles (sorry if contrived) like:


“The weather each day can be snowy, sunny or cloudy. If it’s cloudy or sunny, airplanes will fly. If it’s snowy, airplanes don’t fly, and the post won’t arrive. If airplanes fly, the post will arrive. Today the post arrived.”


“What is the weather?”

This can also be phrased as an entailment problem. The needed information is pretty straightforwardly contained in the text.
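The riddle above really does reduce to a few lines of symbolic inference. Here is a minimal sketch that encodes the riddle's implications directly; the function names and encoding are mine, not from any existing dataset:

```python
def entails_post_arrival(weather):
    """Chain the riddle's implications for a candidate weather state."""
    if weather == "snowy":
        planes_fly = False  # "if it's snowy, airplanes don't fly, and the post won't arrive"
    else:
        planes_fly = True   # "if it's cloudy or sunny, airplanes will fly"
    # "if airplanes fly, the post will arrive"; snowy already rules the post out
    return planes_fly

def consistent_weather(post_arrived):
    """Return every weather state consistent with the observation."""
    return [w for w in ("snowy", "sunny", "cloudy")
            if entails_post_arrival(w) == post_arrived]

print(consistent_weather(True))   # ['sunny', 'cloudy']
```

Since the post arrived, "snowy" is eliminated and the answer is "sunny or cloudy" — exactly the kind of deduction the post argues current models fail to make reliably.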

Needless to say, current reading comprehension and entailment models get this wrong. The ability to represent and utilize declarative knowledge seems essential for moving past fancy pattern matching. Does anyone know of toy “bAbI”-style datasets for this? Such riddles could be generated pretty easily. I'm wondering whether these skills can be more readily applied compositionally to tougher problems, as bAbI hasn’t exactly achieved that.

I’ve seen this: Declarative Question Answering over Knowledge Bases containing Natural Language Text with Answer Set Programming

But those questions are considerably more complex, some requiring external knowledge not contained in the text.

submitted by /u/rtk25

[P] Playing RTS games with audio recognition instead of using hands for input

So about two years ago I started getting shoulder aches, but I still wanted to play RTS games. That’s when I started working on a project to allow me to play certain games without using my hands.

It started off using 100 ms audio samples with a sluggish 80 ms delay before responding to inputs; I’ve since brought it down to 50 ms audio with a response time of 10 ms.

I’m also using an eye tracker to move the mouse around, so the setup is completely hands-free.

A demo where I’m using the program to play StarCraft 2, with all the controls explained during the video, can be found here:

The project has the recording tools needed for data collection, using a sliding window over the microphone input to generate 50ms audio files every 25ms.
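That sliding-window scheme (50 ms frames emitted every 25 ms) can be sketched roughly like this; the sample rate and function names are my assumptions, not taken from the project's code:

```python
import collections

SAMPLE_RATE = 16000            # assumed mic rate; any common rate works
WINDOW_MS, HOP_MS = 50, 25     # 50 ms windows, one every 25 ms
WINDOW = SAMPLE_RATE * WINDOW_MS // 1000   # 800 samples
HOP = SAMPLE_RATE * HOP_MS // 1000         # 400 samples

def sliding_windows(samples):
    """Yield overlapping 50 ms frames from a stream of audio samples."""
    buf = collections.deque(maxlen=WINDOW)
    for i, s in enumerate(samples):
        buf.append(s)
        # emit a frame once the buffer is full, then every HOP samples
        if len(buf) == WINDOW and (i + 1 - WINDOW) % HOP == 0:
            yield list(buf)
```

Each yielded frame overlaps its predecessor by half, which is what lets the recognizer respond faster than the window length itself.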

I added some simple thresholding filters so that I can more easily capture the right audio samples while recording them (sibilants can get by with just a pitch threshold; others, like finger snaps, work best with high peak-to-peak thresholds).
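A rough sketch of what such gates might look like, using zero-crossing rate as a cheap stand-in for a pitch threshold (the thresholds and the ZCR substitution are my own illustration, not the project's actual filters):

```python
def passes_threshold(frame, kind, zcr_floor=0.25, ptp_floor=0.3):
    """Crude recording gates: keep a frame only if it looks deliberate."""
    if kind == "sibilant":
        # sibilants are high-frequency; zero-crossing rate approximates pitch
        crossings = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        )
        return crossings / len(frame) >= zcr_floor
    # percussive sounds (finger snaps): large peak-to-peak amplitude
    return max(frame) - min(frame) >= ptp_floor
```

Frames that fail the gate are simply discarded during data collection, which keeps the training set from filling up with silence and background noise.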

I’m using an ensemble of four-layer neural nets to do the recognition part, plus some post-processing to make sure keyboard inputs fire at the proper times with as few misclicks as possible.
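One common form of that post-processing is a debounce: only emit a key press once the same label has won several consecutive frames. A minimal sketch, where the frame count and label names are illustrative rather than the project's actual values:

```python
def debounce(predictions, min_consecutive=3):
    """Emit a key press only after a label wins several consecutive
    25 ms frames, suppressing one-off recognition misfires."""
    pressed = []
    run_label, run_len = None, 0
    for label in predictions:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        # fire exactly once, at the moment the run reaches the minimum
        if run_len == min_consecutive and label != "silence":
            pressed.append(label)
    return pressed
```

Raising `min_consecutive` trades responsiveness for fewer accidental presses, which matches the kind of per-key threshold tuning described below.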

I validate out-of-sample performance by recording some more sounds and analysing the outputs of the model in a few graphs.

I tweak the post-processing after playing a match, altering the thresholds for input activation based on my experience during it (maybe I felt the SHIFT key was pressed too late, or another key was way too trigger-happy) and by analysing the model output of the match against the CSV output of the recognitions.

The program is multithreaded to ensure that I don’t lose audio recordings during the feature-engineering/evaluation phase.

A github with all the code can be found here:

As for the future, I think I want to make it record 30 ms sounds read at 60 Hz, and maybe fool around with some CNNs to see if that improves the recognition.

Considering I also control the data collection, I can just add a few thousand more samples of certain sounds, so I might try training with 5000 samples per label instead of 1500.

submitted by /u/chaosparrot

Spotting Clouds on the Horizon: AI Resolves Uncertainties in Climate Projections

Climate researchers look into the future to project how much the planet will warm in coming decades — but they often rely on decades-old software to conduct their analyses.

This legacy software architecture is difficult to update with new methodologies that have emerged in recent years. So a consortium of researchers is starting from scratch, writing a new climate model that leverages AI, new software tools and NVIDIA GPUs.

Scientists from Caltech, MIT, the Naval Postgraduate School and NASA’s Jet Propulsion Laboratory are part of the initiative, named the Climate Modeling Alliance — or CliMA.

“Computing has advanced quite a bit since the ‘60s,” said Raffaele Ferrari, oceanography professor at MIT and principal investigator on the project. “We know much more than we did at that time, but a lot was hard-coded into climate models when they were first developed.”

Building a new climate model from the ground up allows climate researchers to better account for small-scale environmental features, including cloud cover, rainfall, sea ice and ocean turbulence.

These variables are too geographically minuscule to be precisely captured in climate models, but can be better approximated using AI. Incorporating the AI’s projections into the new climate model could reduce uncertainties by half compared to existing models.

The team is developing the new model using Julia, an MIT-developed programming language that was designed for parallelism and distributed computation, allowing the scientists to accelerate their climate model calculations using NVIDIA V100 Tensor Core GPUs onsite and on Google Cloud.

As the project progresses, the researchers plan to use supercomputers like the GPU-powered Summit system at Oak Ridge National Labs as well as commercial cloud resources to run the new climate model — which they hope to have running within the next five years.

AI Turns the Tide

Climate scientists use physics and thermodynamics equations to calculate the evolution of environmental variables like air temperature, sea level and rainfall. But it’s incredibly computationally intensive to run these calculations for the entire planet. So in existing models, researchers divide the globe into a grid of sections measuring 100 kilometers on a side.

They calculate every 100 km block independently, using mathematical approximations for smaller features like turbulent eddies in the ocean and low-lying clouds in the sky — which can measure less than one kilometer across. As a result, when stringing the grid back together into a global model, there’s a margin of uncertainty introduced in the output.
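The scale of that grid is easy to make concrete. Using the standard figure of roughly 510 million km² for Earth's surface (a well-known value, not from the article):

```python
EARTH_SURFACE_KM2 = 510e6   # approximate total surface area of Earth
BLOCK_KM = 100              # grid spacing in existing climate models
FINE_KM = 1                 # resolution reachable in GPU simulations

coarse_blocks = EARTH_SURFACE_KM2 / BLOCK_KM**2
fine_cells_per_block = (BLOCK_KM // FINE_KM) ** 2

print(f"{coarse_blocks:,.0f} coarse grid blocks")        # 51,000
print(f"{fine_cells_per_block:,} fine cells per block")  # 10,000
```

Every coarse block hides ten thousand kilometer-scale cells, which is exactly the sub-grid detail the mathematical approximations have to paper over.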

Small uncertainties can make a significant difference, especially when climate scientists are estimating for policymakers how many years it will take for the average global temperature to rise by more than two degrees Celsius. Due to the current levels of uncertainty, researchers project that, at current emission levels, this threshold could be crossed as soon as 2040 — or as late as 2100.

“That’s a huge margin of uncertainty,” said Ferrari. “Anything to reduce that margin can provide a societal benefit estimated in trillions of dollars. If one knows better the likelihood of changes in rainfall patterns, for example, then everyone from civil engineers to farmers can decide what infrastructure and practices they may need to plan for.”

A Deep Dive into Ocean Data

The MIT researchers are focusing on building the ocean elements of CliMA’s new climate model. Covering around 70 percent of the planet’s surface, oceans are a major heat and carbon dioxide reservoir. To make ocean-related climate projections, scientists look at such variables as water temperature, salinity and velocity of ocean currents.

One such dynamic is turbulent streams of water that flow around in the ocean like “a lot of little storms,” Ferrari said. “If you don’t account for all that swirling motion, you strongly underestimate how the ocean is absorbing heat and carbon.”

Using GPUs, researchers can refine the resolution of their simulations from 100 kilometers down to one kilometer, dramatically reducing uncertainties. But these simulations are too expensive to incorporate directly into a climate model that looks decades into the future.

That’s where an AI model that learns from fine-resolution ocean and cloud simulations can help.

“Our goal is to run thousands of high-resolution simulations, one for each 100-by-100 kilometer block, that will resolve the small-scale physics presently not captured by climate models,” said Chris Hill, principal research engineer at MIT’s earth, atmospheric and planetary sciences department.

These high-resolution simulations produce abundant synthetic data. That data can be combined with sparser real-world measurements, creating a robust training dataset for an AI model that estimates the impact of small-scale physics like ocean turbulence and cloud patterns on large-scale climate variables.
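A minimal sketch of that surrogate idea, with an invented one-variable "fine simulation" standing in for the expensive kilometer-scale runs (everything here is illustrative, not CliMA code):

```python
def fine_simulation(temp_gradient):
    """Stand-in for an expensive 1 km run: the 'true' small-scale flux."""
    return 1.5 * temp_gradient

# abundant synthetic training data from many fine runs; sparse real-world
# measurements could simply be appended to the same lists
xs = [i / 100 for i in range(1, 201)]
ys = [fine_simulation(x) for x in xs]

# closed-form least squares through the origin: w = sum(x*y) / sum(x*x)
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def surrogate(temp_gradient):
    """Cheap learned stand-in the coarse climate model can call."""
    return w * temp_gradient
```

The coarse model then calls `surrogate` instead of re-running the fine simulation inside each grid block, which is where the speedup comes from; the real parameterizations are of course far higher-dimensional than this one-coefficient fit.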

CliMA researchers can then plug these AI tools into the new climate model software, improving the accuracy of long-term projections.

“We’re betting a lot on GPU technology to provide a boost in compute performance,” Hill said.

In June, MIT hosted a weeklong GPU hackathon, where developers — including Hill’s team as well as research groups from other universities — used the CUDA parallel computing platform and the Julia programming language for projects such as ocean modeling, plasma fusion and astrophysics.

For more on how AI and GPUs accelerate scientific research, see the NVIDIA higher education page. Find the latest NVIDIA hardware discounts for academia on our educational pricing page.

Image by Tiago Fioreze, licensed from Wikimedia Commons under Creative Commons 3.0 license.

The post Spotting Clouds on the Horizon: AI Resolves Uncertainties in Climate Projections appeared first on The Official NVIDIA Blog.
