"A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state of the art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting."
"In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner."
"Our key contributions are as follows: first, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training."
"Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting."
"Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings."
"Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve."
The level of technological development of any civilization can be gauged in large part by the amount of energy it produces for its use, but it is also reflected in that civilization's stewardship of its home world.
Following the Kardashev definition, a Type I civilization is able to store and use all the energy available on its planet.
In this study, we develop a model based on Carl Sagan's K formula and use this model to analyze the consumption and energy supply of the three most important energy sources: fossil fuels (e.g., coal, oil, natural gas, crude, NGL and feedstocks), nuclear energy and renewable energy. We also consider environmental limitations suggested by the United Nations Framework Convention on Climate Change, the International Energy Agency, and those specific to our calculations to predict when humanity will reach the level of a Kardashev scale Type I civilization.
Our findings suggest that, by the best estimate, humanity will not reach this level until the year 2371.
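Sagan's continuous version of the Kardashev scale, referenced above, is K = (log10(P) − 6) / 10, with P the power a civilization commands in watts; a Type I civilization corresponds to 10^16 W. A minimal sketch (the ~19 TW present-day figure is an assumed round number for illustration):

```python
import math

def kardashev_k(power_watts):
    """Carl Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10,
    where P is the power the civilization commands, in watts."""
    return (math.log10(power_watts) - 6.0) / 10.0

# Rough present-day world primary power, ~19 TW (assumed illustrative figure):
print(kardashev_k(1.9e13))   # about 0.73
# Type I corresponds to 1e16 W:
print(kardashev_k(1e16))     # exactly 1.0
```

The gap between ~0.73 and 1.0 looks small on the K scale, but because K is logarithmic it represents roughly a 500-fold increase in power, which is why the projected date lies centuries away.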
"A cognitive walkthrough is a technique used to evaluate the learnability of a system. Unlike user testing, it does not involve users (and, thus, it can be relatively cheap to implement). Like heuristic evaluations, expert reviews, and PURE evaluations, it relies on the expertise of a set of reviewers to assess the interface.
Although cognitive walkthroughs can be conducted by an individual, they are designed to be done as part of a group in a workshop setting where evaluators walk through a task in a highly structured manner from a new user’s point of view."
"Arbor is a portable, high-performance library for computational neuroscience simulations with multi-compartment, morphologically-detailed cells, ranging from single cell models to very large networks. Optimisations make Arbor an order of magnitude faster than the most widely-used comparable simulation software. Download Arbor as a C++ library and integrate it in your own program, or install it as a Python library (through pip) and import in any Python script."
The effects of robotics and artificial intelligence (AI) on the job market are matters of great social concern. Economists and technology experts are debating at what rate, and to what extent, technology could be used to replace humans in occupations, and what actions could mitigate the unemployment that would result.
To this end, it is important to predict which jobs could be automated in the future and what workers could do to move to occupations at lower risk of automation.
Here, we calculate the automation risk of almost 1000 existing occupations by quantitatively assessing to what extent robotics and AI abilities can replace human abilities required for those jobs.
Furthermore, we introduce a method to find, for any occupation, alternatives that maximize the reduction in automation risk while minimizing the retraining effort.
We apply the method to the U.S. workforce composition and show that it could substantially reduce the workers’ automation risk, while the associated retraining effort would be moderate.
Governments could use the proposed method to evaluate the unemployment risk of their populations and to adjust educational policies. Robotics companies could use it as a tool to better understand market needs, and members of the public could use it to identify the easiest route to reposition themselves on the job market.
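The two ideas in the abstract above, scoring a job's automation risk from how well machine abilities cover its required human abilities, and searching for alternative occupations that cut risk at low retraining cost, can be sketched as a toy model. Every ability name, score, and formula below is made up for illustration; the actual study uses far richer occupation and ability data and a different model.

```python
import numpy as np

# Illustrative ability dimensions and machine-capability scores (all invented).
abilities = ["manual_dexterity", "perception", "social", "creativity"]
machine_capability = np.array([0.8, 0.9, 0.3, 0.2])

# Invented ability-requirement profiles for three occupations.
occupations = {
    "assembler": np.array([0.9, 0.6, 0.2, 0.1]),
    "nurse":     np.array([0.6, 0.7, 0.9, 0.4]),
    "designer":  np.array([0.3, 0.6, 0.5, 0.9]),
}

def automation_risk(required):
    """Toy proxy: the capability-weighted share of a job's ability
    requirements that machines can already cover."""
    return float((required * machine_capability).sum() / required.sum())

def best_transition(current, max_retraining=1.0):
    """Among alternatives within a retraining budget (distance in ability
    space, counting only abilities that must be newly acquired), pick the
    occupation with the lowest automation risk."""
    cur = occupations[current]
    candidates = []
    for name, req in occupations.items():
        if name == current:
            continue
        retraining = float(np.linalg.norm(np.maximum(req - cur, 0.0)))
        if retraining <= max_retraining:
            candidates.append((automation_risk(req), retraining, name))
    return min(candidates) if candidates else None

risks = {name: automation_risk(req) for name, req in occupations.items()}
print(risks)
print(best_transition("assembler"))
```

In this toy setup the assembler's profile overlaps heavily with what machines do well, so its risk is highest, and the recommended transition trades a bounded retraining distance for the largest risk reduction among in-budget alternatives.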
"This project explores the potential of using low-cost/low-bandwidth LoRa radios to build simple mesh networks that can pass text messages around town."
"Networks of this type could be useful for emergency communications or other applications that can take advantage of the fully-autonomous (carbon-neutral) nature of the birdhouse repeater stations."
"The design is made available for amateur (non-commercial) purposes in the spirit of experimentation and knowledge sharing amongst the ham community. At the very least, we're creating homes for some lucky birds in our area."
Humans perceive light in the visible spectrum (400-700 nm). Some night vision systems use infrared light that is not perceptible to humans and the images rendered are transposed to a digital display presenting a monochromatic image in the visible spectrum. We sought to develop an imaging algorithm powered by optimized deep learning architectures whereby infrared spectral illumination of a scene could be used to predict a visible spectrum rendering of the scene as if it were perceived by a human with visible spectrum light. This would make it possible to digitally render a visible spectrum scene to humans when they are otherwise in complete “darkness” and only illuminated with infrared light.
To achieve this goal, we used a monochromatic camera sensitive to visible and near infrared light to acquire an image dataset of printed images of faces under multispectral illumination spanning standard visible red (604 nm), green (529 nm) and blue (447 nm) as well as infrared wavelengths (718, 777, and 807 nm). We then optimized a convolutional neural network with a U-Net-like architecture to predict visible spectrum images from only near-infrared images.
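The learning problem described above, predicting visible-spectrum intensities from near-infrared channels, can be illustrated with a much simpler baseline than the authors' U-Net: a per-pixel linear regression from the three NIR channels to RGB. The data below is synthetic stand-in data (the hidden linear mapping and noise level are assumptions), so this shows only the shape of the task, not the paper's results; a CNN like the U-Net also exploits spatial context that a pixelwise model cannot.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: N pixels, 3 NIR channels (718, 777, 807 nm) mapped to
# 3 visible channels (R, G, B) through a hidden linear map plus noise.
n_pixels = 5000
true_map = rng.normal(size=(3, 3))
nir = rng.uniform(0, 1, size=(n_pixels, 3))
visible = nir @ true_map.T + 0.01 * rng.normal(size=(n_pixels, 3))

# Per-pixel baseline: least-squares fit of RGB from the NIR channels.
coef, *_ = np.linalg.lstsq(nir, visible, rcond=None)
pred = nir @ coef
rmse = float(np.sqrt(((pred - visible) ** 2).mean()))
print(f"per-pixel linear baseline RMSE: {rmse:.4f}")
```

A baseline like this is useful as a sanity check: if a trained U-Net cannot beat a pixelwise linear map on real captures, the network is not yet using the spatial structure that justifies its extra capacity.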
This study serves as a first step towards predicting human visible spectrum scenes from imperceptible near-infrared illumination. Further work can profoundly contribute to a variety of applications including night vision and studies of biological samples sensitive to visible light.