As the amount of data processed in systems grows, support is increasingly sought from artificial-intelligence-based applications.
For a quarter of a century, system administration has been described as the profession of keeping software running uninterrupted and productive by managing it correctly. Even though this description sounds sufficient to explain what a system administrator does, in practice it is not, because managing software correctly is a sophisticated task. A system engineer has to know the fluid structure of countless parameters very well, and must have the dexterity to foresee every possible and spontaneous problem in order to reach the most practical and suitable solutions.
Until a couple of years ago, these passionate system administrators were putting in exceptional effort to do this extremely complicated and stressful job. In recent years, the growth in the number of software systems has increased the amount of data being processed in those systems. This has eroded trust in the ability of pure human intelligence alone to cope with increasingly complex system administration. Thus, to rule out the human factor as much as possible, the search has begun for artificial-intelligence-based solutions that could support, or even replace, system administrators.
Although the concept of ‘artificial intelligence’ sounds exciting, futuristic and somewhat uncanny, before asking “Is it possible to replace a system administrator with a robot?” let’s take a quick look at the most widely used artificial-intelligence-based application: autonomous driving.
How much data do you think must be processed to drive an autonomous car?
For an automobile to move with full automation (the vehicle drives itself with no human intervention), the analog data collected during the drive must be processed. By processing data, we mean converting data into information. So, as a first step, the collected analog data is converted into digital data; then the big data produced by cameras, radars, and more sophisticated devices such as lidar is organized structurally and relationally. All of the technologies used for these three processing steps run on central processing units designed according to reliability-engineering principles and backed by redundant, multi-level subsystems.
In practical terms, this means processing around 50 TB of data for a three-hour autonomous drive. For airplanes, which generate the same amount of data in minutes, the situation is even more complex, and the amount of data to be processed is far greater. This is why a drive in a fully automated car requires very large disk capacity and a processor capable of heavy parallel processing.
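As a quick back-of-the-envelope calculation, using the rough 50 TB / three-hour figure quoted above (an estimate, not a measurement), the sustained data rate looks something like this:

```python
# Rough data rate implied by the figures above: 50 TB over a three-hour drive.
# The input numbers are the article's estimates, not measured values.

TOTAL_DATA_TB = 50        # estimated data volume for a three-hour autonomous drive
DRIVE_HOURS = 3

total_bytes = TOTAL_DATA_TB * 10**12        # TB -> bytes (decimal units)
seconds = DRIVE_HOURS * 3600

rate_gb_per_s = total_bytes / seconds / 10**9
print(f"Sustained ingest rate: ~{rate_gb_per_s:.1f} GB/s")
# -> roughly 4.6 GB/s, which is why fully automated driving needs
#    large storage and heavy parallel processing.
```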
Artificial Intelligence instead of System Administrator
So how much data needs to be processed daily to do the work of a system administrator? First, let us remember that all the data system administrators work on is already digital, so interpretation and conversion of analog data are not an issue. In this sense, the largest amount of data a server typically generates in a day is between 1 and 5 GB, while on much bigger platforms (such as a hybrid cluster) the data to be processed is around 100 GB. In other words, we are talking about a much smaller data volume than for autonomous vehicles. In that case, do the popular system administration methods used today make it easy to operate systems with full automation? To answer that question, let’s look at these methods:
Automation
Since the beginning of the 2000s, it has been believed that everything would be performed via automation and that system administrators would be needed less than before. If you administer your own systems, this belief is correct. If, like Google, AWS, or Facebook, you use the same operating systems and similar hardware across all your servers, then this is not hard to achieve. However, the more you go beyond the standards, the more novel setups you create. These novel setups lead to unique issues, and to resolve them you always need to open back doors.
If you use automation in the systems you administer, you break the rule of administering each system individually, because even a very simple parameter change can make the system too complex for automation to manage. In practice, the system engineer changes the parameter manually on the server at the scheduled time, turns the database servers off and on in a controlled manner, and then declares the changed parameter to the automation as if the automation had made the change itself. As a result, the complexity has multiplied, and now there is an automation database that must be updated continuously. Think of certain routines in the PHP automation manifests you have developed no longer functioning properly on a new operating system: the routines would have to be rewritten. (The quick and temporary solution would be to downgrade to the old operating system and keep using your old packages, but your clients might not accept that.) Nor can you, through automation alone, develop automation routines and release management that maintain a different release for each customer. While the main aim of automation is to standardize, you end up creating unique structures with as many versions as there are customers.
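The "declare the change to automation as if automation made it" step is essentially a drift-reconciliation problem. The following is a minimal, illustrative Python sketch of that idea, not any real tool or the author's setup; the host name, parameter, and data structures are hypothetical:

```python
# Illustrative sketch only: once a parameter is changed by hand on a server,
# the automation's record of that server no longer matches reality, and every
# manual fix must also be written back into the automation database.

# Hypothetical "automation database": what automation believes is deployed.
desired_state = {"db01": {"max_connections": 200, "os": "ubuntu-20.04"}}

# Hypothetical live state, e.g. read from the server after a manual change.
actual_state = {"db01": {"max_connections": 500, "os": "ubuntu-20.04"}}

def find_drift(desired, actual):
    """Return parameters whose live value differs from the automation record."""
    drift = {}
    for host, params in desired.items():
        for key, want in params.items():
            have = actual.get(host, {}).get(key)
            if have != want:
                drift.setdefault(host, {})[key] = {"automation": want, "live": have}
    return drift

print(find_drift(desired_state, actual_state))
# {'db01': {'max_connections': {'automation': 200, 'live': 500}}}
# The engineer now has to update the automation record to 500 as well,
# which is exactly the extra bookkeeping described above.
```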
If the technologies your customers use force you to adopt new features, you need to make updates. And if you have routines and customizations in an automation system you developed for yourself, an upgrade may cause problems precisely because of them.
The “Everything Is in the Cloud” Strategy
Today, the three most commonly used cloud platforms are AWS, Azure, and GCP. The transition to these platforms is moving quickly, and it looks as if it will continue this way. Time will tell whether it produces successful results.
Micro-services (Kubernetes or OpenShift)
Lately, there has been a perception that for system infrastructure to function independently of any cloud platform (to be agnostic), it has to be operated through micro-services. But since it is not possible to convert all of your services to micro-services, and since monolithic systems (one service, one server) have advantages for system administration, this perception is not accurate. If the hundreds of systems behind your service are under no obligation to work in an integrated way, then micro-services are not the solution for you. If you develop an application according to the 12-Factor principles, you can use micro-services for new services and servers.
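To make the 12-Factor reference concrete, here is a minimal sketch of one of its principles, "store config in the environment" (factor III). The variable names are illustrative and not taken from any particular project; the point is that the same code runs unchanged on Kubernetes, OpenShift, or a plain server, with only the injected environment differing:

```python
# Minimal sketch of the 12-Factor "config in the environment" idea:
# everything deployment-specific comes from environment variables,
# so the code contains no platform-specific branches.

import os

DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost:5432/app")
LISTEN_PORT = int(os.environ.get("PORT", "8080"))
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def show_config():
    # Only the environment injected by the platform differs per deployment.
    print(f"db={DATABASE_URL} port={LISTEN_PORT} log={LOG_LEVEL}")

if __name__ == "__main__":
    show_config()
```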
Besides, every intervention in which a system administrator researches a solution on Google and applies it increases the fragility of the systems. From all of this we can see that, in the century ahead, replacing system engineers with any of the solutions produced in the name of full automation (automation, cloud, micro-services) does not seem possible, because every system to be managed requires customized system administration, and managing a system while sidelining system engineers gets harder every day.
Photo by: Jacqueline McCray/www.unsplash.com