Artificial intelligence and machine learning have taken center stage – why

Interest in artificial intelligence (AI), machine learning and deep learning has gained significant traction – why? What was once the stuff of science fiction novels is now becoming reality.

Artificial intelligence and machine learning are not new concepts; Greek mythology is full of huge automata, such as Talos of Crete and the bronze robots of Hephaestus. However, the concept of modern artificial intelligence, as we know it, began in 1956 at Dartmouth College.

Since the 1950s, billions of dollars have funded numerous studies and projects in the field of artificial intelligence, and the field has witnessed countless hype cycles. But only in the past five to ten years has it really taken shape.

The rise of research computing

Research computing has been synonymous with high-performance computing (HPC) for over two decades, a preferred tool in areas such as astrophysics. More recently, however, many other areas of scientific research have emerged that need computing power but fall outside the traditional HPC mould.

Bioinformatics, for example – the development of methods and software tools for understanding biological data, such as human genome research – needs plenty of computing power, but with very different requirements from many existing HPC systems. The quickest route, however, was to fill those existing systems with this work, even though they were simply not built for it.

That is where diversification comes in. You cannot have one system for every type of research; you need to diversify and provide a service or platform. From there, HPC systems were set up to meet different workload demands – such as the high-memory nodes needed to process and analyse large volumes of complex biological data.

Even so, scientific researchers are very good at exhausting whatever resources are available – it is hard to find an idle HPC system, and there are always more research projects than capacity.

With the need for ever larger systems, universities are looking to cloud platforms to help with scientific research. This is one reason why cloud technologies such as OpenStack have started to gain a foothold in higher education.

You can build supercomputers from commodity hardware – reasonably priced, easy to obtain, usually compatible with a variety of technologies, and close to plug-and-play – and use them for everyday research. The cloud then lets an organisation "burst" out to the public cloud any work that is too complex or too large for the commodity HPC system.

Public cloud providers soon spotted this opportunity, which is why we now see companies such as Amazon and Microsoft investing heavily in building HPC-style infrastructure that includes graphics processing units (GPUs) and InfiniBand interconnects.

The availability of large-scale HPC systems, combined with the ability to burst into public cloud infrastructure, has allowed research to become more computationally intensive. It has also benefited from the growing use of GPUs, which in essence has accelerated scientific output.

This combination of advanced technologies lets people study deep learning and machine learning – the precursors of modern artificial intelligence systems – in a more meaningful way. Although deep learning and machine learning algorithms have existed for many years, the computing power to run them in parallel across broad datasets within any useful timeframe was simply not available.

Now, with multiple GPUs in a clustered system, highly complex algorithms can process large amounts of data, and can do so within a reasonable timeframe – which is what makes deep learning and machine learning programmes financially viable today.
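The core idea behind spreading such work across multiple GPUs can be sketched in a few lines. The following is an illustrative toy, not production code: NumPy stands in for the GPU maths libraries, and the four "shards" stand in for four devices computing gradients in parallel before a synchronous averaged update – the pattern used by data-parallel training.

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w

w = np.zeros(5)
n_shards = 4  # stand-in for four GPUs working in parallel
for _ in range(200):
    # each shard computes its gradient independently on its slice of the data
    shard_grads = [gradient(w, Xs, ys)
                   for Xs, ys in zip(np.array_split(X, n_shards),
                                     np.array_split(y, n_shards))]
    # synchronous step: average the shard gradients, then update once
    w -= 0.1 * np.mean(shard_grads, axis=0)

print(np.round(w, 2))  # recovers true_w
```

Because the shards are equally sized, the averaged gradient equals the full-dataset gradient, so the parallel version converges to the same answer while each device only touches a quarter of the data.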

It is this combination of traditional research computing platforms, the public cloud and advances in GPUs that has made artificial intelligence realisable. Interest in AI is huge, and most cutting-edge researchers are trying to understand how cognitive computing can be applied to their research to provide a competitive advantage.

Any area can benefit from artificial intelligence

Where there is data, there is potential to benefit from AI. If you already have a dataset from which you draw insights or results (and almost everyone does!), then you already have a training dataset that can teach an algorithm to help you. Using AI in this way has many possible applications:

Helping human decision-making. An algorithm makes recommendations and provides its reasoning, and a person accepts or rejects them. This is relevant in areas such as medicine, where your doctor could offer an AI-assisted diagnosis.

Making life more interesting. Any task a person can complete in about a second is likely to be achievable by AI. Why keep a person stuck on a repetitive task when a machine can learn to do it, freeing people to take on more complex work and enjoy more free time?

Improving efficiency without compromise. The Square Kilometre Array (SKA) is a good example of a project producing more data than human beings could ever inspect and evaluate – much of it is thrown away before analysis. While most of the SKA's data may be just noise or transient artefacts, it is still worth considering what the discarded data might contain. Applying AI "inline" to this data as it is captured can provide greater clarity and certainty about what to retain and what to discard.
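The inline-filtering idea above can be sketched very simply. In this hedged toy, a synthetic stream mixes background noise with a handful of genuine events, and a plain statistical score decides what to keep; the three-sigma rule here is an illustrative stand-in for whatever trained model a real project would apply.

```python
import numpy as np

# Synthetic stream: mostly background noise, with a few rare genuine events.
rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, size=990)    # background
signals = rng.normal(8.0, 0.5, size=10)   # rare real events
stream = rng.permutation(np.concatenate([noise, signals]))

# Score each sample against the stream's statistics; in a live system these
# would be running estimates updated as data arrives.
mu, sigma = stream.mean(), stream.std()
kept = [x for x in stream if abs(x - mu) / sigma > 3.0]

print(f"kept {len(kept)} of {len(stream)} samples")
```

The point is the shape of the pipeline, not the scoring rule: almost all of the stream is discarded with confidence, while the rare events survive for full analysis.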

The future of research computing and artificial intelligence

AI has always depended on computing power and on the ability to connect compute, networking and storage together. Previously, the link between compute and storage was not fast enough, but suppliers such as Mellanox and Intel keep pushing the speed limits of InfiniBand and Ethernet.

There are also alternatives in compute architecture. Intel has traditionally dominated, but ARM is becoming more competitive and OpenPOWER is pushing new technologies forward.

This increased competition means we will see new combinations of technologies and suppliers within the same system, which will undoubtedly have a positive impact on the research computing disciplines.

In the long run, the real success of AI will come from connecting different HPC systems and frameworks together. Figuring out how to connect them is complicated and a real challenge, but once that problem is solved, we will truly enter a golden age of research computing.
