When do correlations increase with firing rates in recurrent networks? Barreiro, A. K., & Ly, C. (2017). PLOS Computational Biology, 13(4), e1005506.
Consequences of the Oculomotor Cycle for the Dynamics of Perception. Boi, M., Poletti, M., Victor, J. D., & Rucci, M. (2017). Current Biology, 27(9), 1268–1277.
The Head-Direction Signal Plays a Functional Role as a Neural Compass during Navigation. Butler, W. N., Smith, K. S., van der Meer, M. A. A., & Taube, J. S. (2017). Current Biology, 27(9), 1259–1267.
Predicting explorative motor learning using decision-making and motor noise. Chen, X., Mohr, K., & Galea, J. M. (2017). PLOS Computational Biology, 13(4), e1005503.
Feedback Synthesizes Neural Codes for Motion. Clarke, S. E., & Maler, L. (2017). Current Biology, 27(9), 1356–1361.
Direct Brain Stimulation Modulates Encoding States and Memory Performance in Humans. Ezzyat, Y., Kragel, J. E., Burke, J. F., Levy, D. F., Lyalenko, A., Wanda, P., … Pedisich, I. (2017). Current Biology, 27(9), 1251–1258.
A map of abstract relational knowledge in the human hippocampal–entorhinal cortex. Garvert, M. M., Dolan, R. J., & Behrens, T. E. (2017). eLife, 6, e17086.
Sequential sensory and decision processing in posterior parietal cortex. Ibos, G., & Freedman, D. J. (2017). eLife, 6, e23743.
Active Dentate Granule Cells Encode Experience to Promote the Addition of Adult-Born Hippocampal Neurons. Kirschen, G. W., Shen, J., Tian, M., Schroeder, B., Wang, J., Man, G., … Ge, S. (2017). Journal of Neuroscience, 37(18), 4661–4678.
Subsampling scaling. Levina, A., & Priesemann, V. (2017). Nature Communications, 8, 15140.
Noise-enhanced coding in phasic neuron spike trains. Ly, C., & Doiron, B. (2017). PLOS ONE, 12(5), e0176963.
Spatial working memory alters the efficacy of input to visual cortex. Merrikhi, Y., Clark, K., Albarran, E., Parsa, M., Zirnsak, M., Moore, T., & Noudoost, B. (2017). Nature Communications, 8, 15041.
Brain networks for confidence weighting and hierarchical inference during probabilistic learning. Meyniel, F., & Dehaene, S. (2017). Proceedings of the National Academy of Sciences of the United States of America, 114(19), E3859–E3868.
Statistical learning in social action contexts. Monroy, C., Meyer, M., Gerson, S., & Hunnius, S. (2017). PLOS ONE, 12(5), e0177261.
Saccadic eye movements impose a natural bottleneck on visual short-term memory. Ohl, S., & Rolfs, M. (2017). Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(5), 736–748.
Correlates of Perceptual Orientation Biases in Human Primary Visual Cortex. Patten, M. L., Mannion, D. J., & Clifford, C. W. G. (2017). Journal of Neuroscience, 37(18), 4744–4750.
Medial Entorhinal Cortex Selectively Supports Temporal Coding by Hippocampal Neurons. Robinson, N. T. M., Priestley, J. B., Rueckemann, J. W., Garcia, A. D., Smeglin, V. A., Marino, F. A., & Eichenbaum, H. (2017). Neuron, 94(3), 677–688.e6.
Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. Schwalger, T., Deger, M., & Gerstner, W. (2017). PLOS Computational Biology, 13(4), e1005507.
Homeostatic Plasticity Shapes Cell-Type-Specific Wiring in the Retina. Tien, N.-W., Soto, F., & Kerschensteiner, D. (2017). Neuron, 94(3), 656–665.e4.
Robust information propagation through noisy neural circuits. Zylberberg, J., Pouget, A., Latham, P. E., & Shea-Brown, E. (2017). PLOS Computational Biology, 13(4), e1005497.
Graphics research from Daniel Sýkora et al. at DCGI, Czech Republic, presents a method of real-time style transfer focused on human faces, similar to their previous StyLit work.
Note that the video below is a silent demonstration of results; the official paper has not yet been made public:
Results video for the paper: Fišer et al.: Example-Based Synthesis of Stylized Facial Animations, to appear in ACM Transactions on Graphics 36(4):155, SIGGRAPH 2017.
[EDIT: 20 July 2017]
An official video (no audio) and a project page have been made public for the project:
We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject’s identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.
Link
This story has already been doing the rounds but is still very interesting: machine learning research from Georgia Tech manages to clone a game engine from a video recording.
The top GIF is the reconstructed clone; the bottom GIF is from the video recording:
Georgia Institute of Technology researchers have developed a new approach using an artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.
Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.
To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”
Their current work uses Super Mario Bros. and they’ve started replicating the experiments with Mega Man and Sonic the Hedgehog as well. The same team first used AI and Mario Bros. gameplay video to create unique game level designs.
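The frame-prediction idea described above can be sketched very loosely. This is a hypothetical toy reconstruction, not the Georgia Tech system (which induces full engine rules from sprite analysis): represent each frame as a set of sprite positions, infer each sprite's velocity from consecutive frames, and predict the next frame by extrapolation. A learned engine is then judged by how well its predictions match the actual recording.

```python
# Toy sketch (assumed names, not the published system): frames are dicts
# mapping a sprite name to its (x, y) position on screen.
def predict_next(prev_frame, curr_frame):
    """Predict the next frame by constant-velocity extrapolation."""
    nxt = {}
    for name, (x, y) in curr_frame.items():
        px, py = prev_frame.get(name, (x, y))
        vx, vy = x - px, y - py           # velocity inferred from the video
        nxt[name] = (x + vx, y + vy)      # extrapolate one frame forward
    return nxt

# Example: Mario moving right at 2 px/frame
f0 = {"mario": (10, 40)}
f1 = {"mario": (12, 40)}
print(predict_next(f0, f1))   # -> {'mario': (14, 40)}
```

The real system learns far richer rules than constant velocity (collisions, enemy behaviour, rendering), but the training signal is the same: minimise the gap between predicted and observed frames.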
More Here
A video from Yingtao Tian presents anime characters generated using generative adversarial networks (GANs):
You can create your own using the webtoy MakeGirlsMoe here
https://vimeo.com/175247441
SP. 103 - Ghost in the Shell: The New Movie (2015)
Today, we’re celebrating the Red Planet! Since our first close-up picture of Mars in 1965, spacecraft voyages to the Red Planet have revealed a world strangely familiar, yet different enough to challenge our perceptions of what makes a planet work.
You’d think Mars would be easier to understand. Like Earth, Mars has polar ice caps and clouds in its atmosphere, seasonal weather patterns, volcanoes, canyons and other recognizable features. However, conditions on Mars vary wildly from what we know on our own planet.
Viking Landers
Our Viking Project found a place in history when it became the first U.S. mission to land a spacecraft safely on the surface of Mars and return images of the surface. Two identical spacecraft, each consisting of a lander and an orbiter, were built. Each orbiter-lander pair flew together and entered Mars orbit; the landers then separated and descended to the planet’s surface.
Besides taking photographs and collecting other science data, the two landers conducted three biology experiments designed to look for possible signs of life.
Pathfinder Rover
In 1997, Pathfinder was the first-ever robotic rover to land on the surface of Mars. It was designed as a technology demonstration of a new way to deliver an instrumented lander to the surface of a planet. Mars Pathfinder used an innovative method of directly entering the Martian atmosphere, assisted by a parachute to slow its descent and a giant system of airbags to cushion the impact.
Pathfinder not only accomplished its goal but also returned an unprecedented amount of data and outlived its primary design life.
Spirit and Opportunity
In January 2004, two robotic geologists named Spirit and Opportunity landed on opposite sides of the Red Planet. With far greater mobility than the 1997 Mars Pathfinder rover, these robotic explorers have trekked for miles across the Martian surface, conducting field geology and making atmospheric observations. Carrying identical, sophisticated sets of science instruments, both rovers have found evidence of ancient Martian environments where intermittently wet and habitable conditions existed.
Both missions exceeded their planned 90-day mission lifetimes by many years. Spirit lasted 20 times longer than its original design until its final communication to Earth on March 22, 2010. Opportunity continues to operate more than a decade after launch.
Mars Reconnaissance Orbiter
Our Mars Reconnaissance Orbiter left Earth in 2005 on a search for evidence that water persisted on the surface of Mars for a long period of time. While other Mars missions have shown that water flowed across the surface in Mars’ history, it remained a mystery whether water was ever around long enough to provide a habitat for life.
In addition to using the orbiter to study Mars, we’re using data and imagery from this mission to survey possible future human landing sites on the Red Planet.
Curiosity
The Curiosity rover is the largest and most capable rover ever sent to Mars. It launched on Nov. 26, 2011, and landed on Mars on Aug. 5, 2012. Curiosity set out to answer the question: Did Mars ever have the right environmental conditions to support small life forms called microbes?
Early in its mission, Curiosity’s scientific tools found chemical and mineral evidence of past habitable environments on Mars. It continues to explore the rock record from a time when Mars could have been home to microbial life.
Space Launch System Rocket
We’re currently building the world’s most powerful rocket, the Space Launch System (SLS). When completed, this rocket will enable astronauts to begin their journey to explore destinations far into the solar system, including Mars.
Orion Spacecraft
The Orion spacecraft will sit atop the Space Launch System rocket as it launches humans deeper into space than ever before. Orion will serve as the exploration vehicle that will carry the crew to space, provide emergency abort capability, sustain the crew during the space travel and provide safe re-entry from deep space return velocities.
Mars 2020
The Mars 2020 rover mission takes the next step in exploration of the Red Planet by not only seeking signs of habitable conditions in the ancient past, but also searching for signs of past microbial life itself.
The Mars 2020 rover introduces a drill that can collect core samples of the most promising rocks and soils and set them aside in a “cache” on the surface of Mars. The mission will also test a method for producing oxygen from the Martian atmosphere, identify other resources (such as subsurface water), improve landing techniques and characterize weather, dust and other potential environmental conditions that could affect future astronauts living and working on the Red Planet.
For decades, we’ve sent orbiters, landers and rovers, dramatically increasing our knowledge about the Red Planet and paving the way for future human explorers. Mars is the next tangible frontier for human exploration, and it’s an achievable goal. There are challenges to pioneering Mars, but we know they are solvable.
To discover more about Mars exploration, visit: https://www.nasa.gov/topics/journeytomars/index.html
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
The last episode of the season from tocotocotv profiles the developer behind many music-driven video games, most famously recognized for Rez.
Remember to click “CC” to activate English subtitles:
Our last episode of the season features Tetsuya Mizuguchi, game creator and founder of the Enhance Games studio. Mizuguchi is the creator of the iconic Rez, recently remastered as Rez Infinite; he also created Lumines, Child of Eden, and Space Channel 5, all gaming experiences strongly influenced by music. Always ahead of his time, Mizuguchi will tell us more about his motivations behind Rez Infinite and his pursuit of new forms of perception, supported by new technologies such as virtual reality and by original concepts such as the Synesthesia Suit. Our day with Mizuguchi will take us to the heights of Mori Tower, then to the intimacy of the Restaurant Bohemian, where we will learn more about his philosophy and his work.
Link
Another machine learning experiment from Samim applies regression to moving images, breaking each frame down into visual facets to create a polygonal, Modernist style:
Regression is a widely applied technique in machine learning … Regression analysis is a statistical process for estimating the relationships among variables. Let’s have some fun with it ;-)
… This experiment tests a regression-based approach for video stylisation. The following video was generated using Stylize by Alec Radford. Alec extends Andrej’s implementation and uses a fast Random Forest Regressor. The source video is a short by JacksGap.
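The core trick behind this kind of stylisation can be sketched in a few lines. This is my own minimal reconstruction of the idea, not Alec Radford's actual Stylize code: fit a regressor that maps a pixel's normalized (y, x) position to its (r, g, b) colour, then repaint the frame from the regressor's predictions. Shallow random-forest trees partition the image into flat axis-aligned regions, which produces the faceted, Modernist look; all function and parameter names here are assumptions.

```python
# Sketch of coordinate-to-colour regression stylisation (hypothetical,
# reconstructed from the description above, not the original Stylize code).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def stylize_frame(frame, n_trees=10, max_depth=8):
    """frame: (H, W, 3) float array in [0, 1]. Returns the repainted frame."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([ys.ravel() / h, xs.ravel() / w])  # features: (y, x)
    colours = frame.reshape(-1, 3)                              # targets: (r, g, b)
    model = RandomForestRegressor(n_estimators=n_trees,
                                  max_depth=max_depth, random_state=0)
    model.fit(coords, colours)          # learn position -> colour
    return model.predict(coords).reshape(h, w, 3)

if __name__ == "__main__":
    frame = np.random.rand(32, 32, 3)   # stand-in for a real video frame
    out = stylize_frame(frame)
    print(out.shape)
```

For video you would run the same fit/predict per frame; limiting tree depth trades fidelity for larger, more abstract facets.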
You can find out more about the machine learning experiment here