1 – New Google Robot Shows Potential for Artificial Intelligence and Delivery Advances
This week, Google unveiled ‘Atlas, The Next Generation’, an improved humanoid robot engineered by its subsidiary Boston Dynamics. The new Atlas uses sensors in its body and head to balance, avoid obstacles, assess terrain, and manipulate objects. In a video released on YouTube, the 5’9″, 180-pound Atlas picks up two 10-pound boxes and keeps working even after an engineer pulls objects away from it and pushes it to the ground. Atlas is a product of Google’s “X” division, which also houses the company’s self-driving vehicles. Atlas’ purpose is still up for speculation; Google could potentially deploy it in its piloted delivery services (currently running in Los Angeles and San Francisco), though it seems more likely that the humanoid robot will be used for defense and law enforcement initiatives, such as bomb detection.
(Read the full article on Silicon Valley Business Journal)
2 – Facebook AI Research Launches Partnership Program
Facebook AI Research (FAIR) announced a new project on Thursday aimed at partnering with and accelerating AI and machine learning research in Europe. The Facebook AI Research Partnership Program will kick off with the donation of 25 cutting-edge GPU-based servers to research institutions in the European Union, beginning with Germany. The first recipient is Dr. Klaus-Robert Müller at TU Berlin, whose team will use the servers for image analysis of breast cancer and chemical modeling of molecules. Research institutions are invited to apply for a GPU donation; selections will be made based on specified criteria and research relevance.
(Read the full article on Research at Facebook)
3 – Artificial Intelligence Startup MedyMatch Launches
MedyMatch Technology is launching a new artificial intelligence product for the medical industry that uses real-time decision tools to improve diagnostics. Earlier this month, the Israel-based startup also announced Gene Saragnese, formerly CEO of Philips Imaging Systems, as MedyMatch’s new chairman and CEO. The company plans to open a new location in Boston to support its machine learning and deep learning collaborations. New support tools paired with emergency room imaging platforms can help medical professionals recognize hard-to-diagnose or otherwise obscured conditions in patients. Better insights and improved diagnostics can help reduce costs for both healthcare institutions and individuals.
(Read the full article on HealthcareITNews)
4 – Google’s DeepMind Forms Health Unit to Build Medical Software
Google’s London-based DeepMind is continuing its forays into the healthcare industry by establishing DeepMind Health, a venture that includes partnerships with Imperial College London and the Royal Free London NHS Foundation Trust. The new division will start with 15 people but is expected to grow rapidly, according to DeepMind Co-founder Mustafa Suleyman. DeepMind Health team members have already developed a task-management app for clinicians, as well as Streams, software that allows doctors to view medical results more quickly. Streams was created in tandem with the Royal Free Hospital. DeepMind Health’s ongoing aim is to provide the medical community with tools that help it make sense of huge (and often overwhelming) influxes of data.
(Read the full article on BloombergBusiness)
5 – Stanford Researchers Use Dark of Night and Machine Learning to Shed Light on Global Poverty
Researchers at Stanford have developed a machine learning algorithm that can efficiently identify poverty zones from information in satellite images. The ‘poverty-mapping’ technique is the work of algorithms that analyzed millions of high-resolution daytime and nighttime satellite images of known and likely high-poverty areas, such as parts of Sub-Saharan Africa. Using a technique called transfer learning, the system transferred what it learned about identifying the types of infrastructure typically associated with more prosperous zones to recognizing the infrastructure (or lack thereof) of more impoverished ones. Assistant Professor Stefano Ermon said,
“When we compared our model with predictions made using expensive field-collected data, we found the performance levels were very close.”
Researchers hope that this algorithmic model will one day replace ground surveys, which are an expensive and time-consuming option currently used for poverty mapping.
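The transfer-learning idea described above can be illustrated with a minimal sketch. The Stanford team's actual model is a deep convolutional network trained on satellite imagery; the toy code below only mimics the general pattern with stand-ins: a "pretrained" feature extractor whose weights are frozen (here, fixed random projections standing in for features learned on a related task), with only a small new classification head trained on top for the target task. All data, dimensions, and the logistic-regression head are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: its weights are frozen
# (never updated below), as in transfer learning.
W_pretrained = rng.normal(size=(8, 4))

def extract_features(x):
    # Frozen layer with a ReLU nonlinearity; no gradient updates here.
    return np.maximum(x @ W_pretrained, 0.0)

# Synthetic stand-in data: 200 "images" of 8 raw values each, with a
# toy binary "poverty" label derived from the first two inputs.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train ONLY a small new head (logistic regression) on the frozen features.
feats = extract_features(X)
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid predictions
    w -= lr * (feats.T @ (p - y) / len(y))       # gradient step on head only
    b -= lr * np.mean(p - y)

preds = 1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5
accuracy = float(np.mean(preds == y))
```

The design point is that the expensive part (the pretrained extractor) is reused unchanged, and only the lightweight head is fit to the new task, which is what makes transfer learning attractive when labeled data for the target task is scarce.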
(Read the full article on Stanford News)
Image credit: Silicon Valley Business Journal