
14th October 2020, Kathmandu

If you have come across the movie Infinity Chamber, you probably know about Howard, an artificial-intelligence-infused character who displays an amazing understanding of emotions and feelings, much like his human captive Frank. For now that exists only in a sci-fi movie, but are we heading in that direction? Let’s imagine another situation: you are with your doctor, explaining your medical condition, and your doctor is nothing but an advanced app on your smartphone, listening to the conversation and capturing video. At the end of the conversation, it not only diagnoses your condition but also recommends further tests and puts its findings forward for your actual physician to review. Yes, this is a future of technology that may not be entirely a utopia yet, but a centaur, a human-machine partnership driven by big data and Artificial Intelligence (AI).

The biggest asset of AI is data, lots of it. Whether you agree or not, we are already being influenced by its power. Let’s borrow some ideas from Yuval Harari’s 21 Lessons for the 21st Century to explain how we tend to fall for it. We check Google Maps even on our daily commute because we have all had some unpleasant experience, like being stuck in traffic after ignoring its directions. Eventually the trust grows, and we start surrendering our judgment to Google Maps. Similarly, we go to libraries less for information gathering and instead Google what we’re looking for, opening ourselves to algorithms that can shape what we believe. We buy the highest-rated products on Amazon or choose the highest-rated doctors for our treatment. These are all forms of direct or indirect control through data, minor examples of how big data influences our day-to-day lives.

To be direct, AI is here to stay and grow. Let’s not imagine AI as a character portrayed in movies, with consciousness, falling in love, or destroying its creators. Building such machines would be a whole new level, and we are not yet sure whether AI, as a non-organic substance, can attain consciousness. Forget building machines with consciousness; we don’t even fully understand consciousness itself. For now, we can only think of AI as high-level software that chews through lots of quality data (garbage in, garbage out), makes decisions well above the average human’s, or becomes part of transhumanist efforts to enhance human cognitive capacity and health. It is up to us to drive these technologies toward a meaningful impact. Let me relate this to healthcare, since I work in that field. Although healthcare does not differ from any other field in adopting technologies, it is a sensitive area governed by heavily scrutinized regulations. Still, AI and big data collectively are going to change, for the good, an entire healthcare economy worth $280 billion and growing 7.9% annually (Grand View Research, 2020–2027). They can identify diseases, recommend treatments with sharper accuracy than their human counterparts, and change how the healthcare system handles patients. It is an enormous opportunity for the healthcare sector to advance into every neglected arena. The only thing needed is to keep embracing ethical practices and implementing them for the sake of humanity.

Eventually, humanity is going to build intelligent machines that understand the biochemical processes of the human body, detect chemical changes, and predict the chances of developing a disease like cancer or diabetes. They could also help calculate the probability of a pandemic well ahead of time, so governments have enough time to prepare. Such machines will use the power of data to build data intuition through pattern recognition methods (Christopher M. Bishop, Pattern Recognition and Machine Learning, New York: Springer, 2007). We are already taking advantage of these methods to detect sepsis or breast cancer, as the sketch below illustrates. With time, we will have integrated AI devices for all major diseases. These devices may contain single- or multi-disease detection algorithms. Human partners will make sure these devices are integrated with one another in decision-making processes, and they will retain control over whether an individual or a combined decision is made. Such control over decision making will keep a failure in one device from affecting the others, avoid mass errors, and allow the devices to be updated simultaneously as new methods are approved.
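
To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern recognition described above: a classifier trained on the Wisconsin breast cancer dataset bundled with scikit-learn, which learns patterns from labeled cases and outputs a disease probability for new ones. It is an illustration only, not a clinical tool; real diagnostic systems require far larger datasets, rigorous validation, and regulatory approval.

```python
# Minimal illustration of pattern recognition for disease prediction.
# Uses scikit-learn's bundled Wisconsin breast cancer dataset; this is a
# teaching sketch, not a clinically validated diagnostic model.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Labeled examples: tumor measurements (features) and benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Standardize the features, then fit a logistic regression "pattern recognizer".
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Predict the probability of disease for unseen cases and score the model.
risk_scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC on held-out cases:", round(roc_auc_score(y_test, risk_scores), 3))
```

The same recipe, with richer data and stronger models, is what underlies the sepsis and breast cancer detectors mentioned above.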

These AI devices will eventually develop into superhuman practitioners while strictly adhering to the directions of their human masters. Let’s take doctors as an example and see how algorithms could rise as super doctors. Doctors train for years, which amounts to feeding data into their brains and training billions of neurons to calculate probabilities. Later they use their intuition to make predictions about diseases, recognizing patterns, in AI terms. We all know humans are limited in how much of such data they can consume and process, whereas AI faces no such limit. This huge data consumption will help it make better decisions, and more of them, in a limited time. You may think AI can’t be a super doctor until it has compassion, but at least these machines won’t make unethical errors, because they will follow the strict ethics coded into them. These super doctors will be far cheaper and within reach of every household in the world. The poorest person in the least developed part of the world will have better access to treatment than today’s advanced medical care in highly developed nations (Maged N. Kamel Boulos et al., “How Smartphones Are Changing the Face of Mobile and Participatory Healthcare: An Overview, with Example from eCAALYX”). They will be able to diagnose diseases anytime, anywhere. Again, only a human-machine partnership can bring ethical and optimal results to society.

Of course, there will be extensive discussions and uprisings over job losses or ill-intended uses of these technologies. In the 19th century, a similar fear of job losses arose when industrial automation began, stoked to garner support from the proletariat class and feed vested political interests. However, that entire process of automation created more jobs, gave people more options, and enhanced the economy. When drones were created, pilots feared for their jobs. It turns out that replacing a single pilot requires about 13 more people to fly a drone (Kate Brannen, “Air Force’s Lack of Drone Pilots Reaching ‘Crisis’ Levels”, Foreign Policy, January 15, 2015). People still prefer drones, even at the cost of more jobs, because the advantages easily outweigh the drawbacks. We might have to diversify our own skills to remain employable, but we will have time, since politicians will debate for decades and slow AI implementation.

To counter unethical and ill uses of AI devices, we definitely need strong laws that hold every involved party responsible. We should put the brightest people in charge of governing the data. If we fail, corporations or data owners might establish themselves as a new superior class. Ill-founded hysteria is always present in any field, but we need strong positive willpower and sensible laws to defuse such negative intentions.

AI is inevitable. The sooner we have a grasp of it, the better our position will be. The healthcare sector definitely needs to build and catch up with AI technology on the clinical side, technology that will go hand in hand with our frontline health workers and help clinicians make better decisions while providing care. Imagine how helpful such devices would be right now in handling the COVID-19 situation, if we already had them.

Narayan P Devkota, MBA
Data Analyst, HCA, TN USA
