Honoring CEO of Boston Global Forum, Nguyen Anh Tuan as Person of the Year 2018 in Vietnam

Vietnam National Television (VTV) honored Mr. Nguyen Anh Tuan, CEO of the Boston Global Forum and Director of the Michael Dukakis Institute for Leadership and Innovation, as Person of the Year 2018 for the Artificial Intelligence World Society and AI-Government.

The Artificial Intelligence World Society (AIWS) is a set of values, ideas, concepts, and protocols for standards and norms whose goal is to advance the peaceful development of AI to improve politics, society, and the quality of life for all humanity. It was conceived by the Michael Dukakis Institute for Leadership and Innovation (MDI) and announced in November 2017.

AI-Government is a component of the AIWS. The concept of AI-Government, co-authored by Governor Michael Dukakis, Mr. Nguyen Anh Tuan, Professor Nazli Choucri, and Professor Thomas Patterson, was developed at the Michael Dukakis Institute for Leadership and Innovation and announced on June 25, 2018. MDI has attracted prominent scholars, experts, innovators, and policymakers to contribute to and develop AIWS.

AIWS had a great year, with the AIWS Report for the 2018 G7 Summit, the World Leader in AIWS Award honoring the Secretary-General of the OECD, the AIWS Ethics Index, the AIWS Distinguished Lecture by Liam Byrne, keynote lectures on “AIWS and AI-Government” at the 13th INTELS conference on AI in Saint Petersburg and at the AI World Conference and Expo in Boston, and AIWS Roundtables at Harvard and in Tokyo.

The Award Ceremony was broadcast live on December 31, 2018, during VTV’s special program welcoming the New Year 2019.

The financial services sector is making progress toward deploying machine learning

According to a recent survey conducted by MIT Technology Review Insights in association with Google, marketers in the financial services industry are among the most progressive adopters of machine learning, using it to streamline operations and optimize business outcomes. The research found that 41 percent of financial services marketers are currently using ML, and another 30 percent plan to deploy the technology within the year.

This is a promising signal, as innovation, and the development of machine learning in particular, is driving companies to adapt to the times. “ML is having a profound and transformational impact across every function in financial services, and marketing is one of the areas leading the way,” said Ulku Rowe, technical director for financial services at Google Cloud and former CTO at JPMorgan Chase. “ML is helping financial services marketers to keep up with constantly evolving consumer behavior and to ensure that they get the best value out of every marketing dollar spent.”

On the one hand, the financial services sector must stay ahead of the technology curve; on the other, marketers in the field are under intense pressure to run accurate campaigns while maintaining conservative practices. Since the industry requires strict protection of customer data, along with measures to guarantee security and mitigate risk, machine learning must be deployed with great caution. When deployed, however, machine learning can support compliance and accounting teams with forensic analysis, following the money trail and spotting anomalies.

The survey also shows that machine learning has great potential for anticipating customer needs, with 60 percent of financial services marketers expressing confidence in machine learning’s ability to capture the entire customer journey. Meanwhile, 44 percent of respondents are using machine learning to assess the value of their future customers. Stephen Arthur, Google’s managing director for finance partnerships, noted that Google has been using machine learning to leverage data to understand and predict when someone is experiencing a significant life event, helping marketers reach them more effectively during those moments.

More and more applications of technology are being embedded in business operations around the world; however, this requires organizations to invest in both people and resources so that the technology can be operated safely and effectively. Layer 7, “Business Applications for All of Society: Engage and Assist Businesses,” of the AIWS 7-Layer Model developed by the Michael Dukakis Institute aims to ensure that advanced technologies, especially AI, remain benevolent and free from risks of misuse, error, or loss of control, and have positive effects on society.

Artificial Intelligence is the focus of 2019 cybersecurity predictions

Artificial Intelligence is expected to affect cybersecurity significantly in 2019. Below are predictions from researchers and executives across the industry.

WatchGuard Threat Lab research team

In 2019, cybercriminals will build malicious chatbots that try to socially engineer victims into clicking links, downloading files, or sharing private information. Those chatbots will be controlled by AI.

Candace Worley, Chief Technical Strategist, McAfee

Firms working with AI need to supervise its training to avoid harm. Beyond privacy regulations, legal, ethical, and cultural implications must also be considered to guarantee that AI handles real-life situations with fairness and responsibility.

Jason Rebholz, Senior Director, Gigamon

AI can help analysts make basic decisions based on available information, freeing them to spend more time on advanced analysis.

Morey Haber, CTO, and Brian Chappell, Senior Director of Enterprise & Solutions Architecture, BeyondTrust

As AI evolves rapidly, it could be used to plan attacks and automate data collection, leading to a higher rate of successful attacks. More attacks using AI and machine learning are predicted for 2019.

Malwarebytes Labs Team

In the future, malware may be created by artificial intelligence. It would be extremely dangerous if an AI could create, adapt, and communicate with malware. Moreover, an AI could track how its malware was detected on a compromised computer and then rapidly generate new malware in response.

Mark Zurich, senior director of technology, Synopsys

Machine learning and artificial intelligence are expected to contribute greatly to cybersecurity thanks to their ability to find threats with speed and exactitude. “However, many of the articles that I’ve been reading on this topic are expressing skepticism and concern that companies will be lulled into a false sense of security that their detection efficacy is acceptable through the application of ML/AI when that may not actually be the case,” said Mark Zurich. “We should expect to see large companies continue to invest in this technology and startup companies touting ML/AI capabilities to continue to crop up in 2019.”

Ari Weil, Global VP of Product and Industry Marketing, Akamai

The promise of AI and machine learning may be reconsidered in 2019. “Whether the catalyst comes from forensic tools that miss detecting advanced threats until significant damage has been done, or monitoring and analytics software that fails to detect the root cause of an issue in a complex deployment environment, the industry will reawaken to the value of evolving specialists vs. purchasing intelligence,” according to Ari Weil.

Gilad Peleg, CEO, SecBI

AI will give attackers more power. AI-driven hacking is likely to grow faster and larger, achieving more success in cyberattacks. To counter these malicious activities, cyber defense will need AI’s support as well. “With machine learning and AI-driven response, security teams can automate triage and prioritization while reducing false positives by up to 91%,” Gilad Peleg said.

Jason Rebholz, Senior Director, Gigamon

The technology industry will see a significant boost and depend more heavily on AI as automation plays a larger role.

Malcolm Harkins, Chief Security and Trust Officer, Cylance

In 2019, AI-based technology could learn what data is sensitive and categorize it accordingly. This development will require a higher level of data management and control.

Rajarshi Gupta, head of AI at Avast

AI’s role will be indispensable in ending clone phishing. “I predict AI will become effective in dealing with these clone phishing attacks by detecting short-lived websites built for phishing. AI can move faster than traditional algorithms when identifying fake sites…” Rajarshi said.

The AIWS Initiative’s goal is to encourage new ideas, concepts, standards, norms, models, and innovations relating to AI. To that end, the AIWS Standards and Practice Committee has continued building the AIWS 7-Layer Model, a set of ethical standards for AI intended to keep this technology safe, humanistic, and beneficial to society.

UK invests £26.6 million in developing micro-robots to work in dangerous environments

Working in underground pipe systems not only costs a great deal of time and money, but also puts humans in danger. Recently, the UK government decided to invest millions in developing micro-robots that can work in and reach hazardous environments.

Small robots will be deployed in underground pipes to make repairs. They could also reach unsafe locations such as offshore wind farms and nuclear decommissioning facilities.

With a £7.2 million government investment, scientists from four British universities, led by Professor Kirill Horoshenkov of the University of Sheffield, will develop 1 cm-long devices equipped with sensors and navigation systems.

As part of a £26.6 million investment by the UK government, it is hoped that the devices will spell the end of the disruption caused by the 1.5 million road excavations that happen annually.

A further 14 projects, funded with £19.4 million, will focus on how to use robotics in hazardous environments.

Skidmore said: “While for now we can only dream of a world without roadworks disrupting our lives, these pipe-repairing robots herald the start of technology that could make that dream a reality in the future.”

Sir Mark Walport, UKRI’s chief executive, said: “The projects announced today demonstrate how robots and artificial intelligence will revolutionize the way we carry out complex and dangerous tasks, from maintaining offshore wind farms to decommissioning nuclear power facilities.

“They also illustrate the leading role that the UK’s innovators are playing in developing these new technologies which will improve safety and boost productivity and efficiency.”

This project could be a step forward in developing robots and their applications in society. However, human control remains necessary. The Boston Global Forum and the Michael Dukakis Institute are studying an ethical framework for safely using robots and AI through the AIWS Initiative.

Why the overestimation of Artificial Intelligence is dangerous

“Artificial Intelligence has great potential to change our lives, but it seems that the term is being overused,” said Zachary Lipton, assistant professor at Carnegie Mellon University.

Billions of dollars are being invested in AI startups and AI projects at giant companies. The problem is that real opportunities are being overshadowed by those who overstate the technology’s power.

At MIT Technology Review’s EmTech conference, Zachary Lipton warned that hype blinds people to the technology’s limitations, making it increasingly difficult to distinguish real progress from exaggeration.

The AI technique known as deep learning is very powerful in image recognition and voice translation, and it now helps power everything from self-driving cars to translation applications on smartphones.

But the technology still has significant limitations. Many deep learning models work well only with huge amounts of data, and they often struggle when real-world conditions change rapidly.

In his presentation, Lipton also highlighted the tendency of AI boosters to claim human-like capabilities for the technology. The risk is that the hype will lead people to place unwarranted trust in algorithms for tasks such as autonomous driving and clinical diagnosis.

“Policymakers don’t read the scientific literature,” warned Lipton, “but they do read the clickbait that goes around.” The media’s role here is complicated, since coverage often fails to distinguish real strengths from PR tricks.

Lipton is not the only researcher sounding a warning bell. In a recent post, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Michael Jordan, a professor at the University of California, Berkeley, says that AI is all too often bandied about as “an intellectual wildcard,” which makes it harder to think critically about the technology’s potential impact.

As AI is deployed for use by business, industry, and private citizens, it is essential that AI technologies remain benevolent and free from risks of misuse, error, or loss of control, according to Layer 7 of the AIWS 7-Layer Model developed by the Michael Dukakis Institute. Through Layer 7, and the Model as a whole, AIWS hopes to ensure that inviting AI into our lives will have positive effects.