Image Credit: Interesting Engineering
By Angela Choi
What first comes to mind when you consider how we use AI in our daily lives? Most of us think of Alexa, self-driving cars, or even our own Netflix recommendations. But AI has much more to offer across countless industries, and it has the potential to revolutionize the healthcare sector forever.
AI-powered personal healthcare apps enable smart, efficient workflows that can improve the patient experience and provide better services. These apps continuously collect data and check the user's vitals, serving much the same purpose as wearables and standalone monitors. The collected data, stored locally or online, can then be retrieved by medical professionals as a medical report.
For instance, WebMD, one of the most well-known symptom-checkers that millions of people use every day, built an app that uses machine learning to provide trusted information that has been reviewed by qualified physicians. Additional features of the app include medication reminders, fitness tracking, updates on the latest news in healthcare, and a directory of local physicians to help users arrange appointments.
Ada, which was developed in 2016 to relieve pressure on healthcare professionals, is another medical app that is now used in 140 countries to provide care to patients at home. With the app’s instant messaging design, Ada asks simple, relevant questions about the user’s symptoms to gain a better understanding of their health. Then, Ada determines the potential medical issue by pulling data from its virtual medical library, which stores data from thousands of similar cases. Through classification, clustering, and information extraction, this AI-powered doctor can offer advice to the user on what to do next, whether it be self-medication or seeking assistance from a nearby health professional.
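Ada's internal models are proprietary, but the core idea of matching reported symptoms against a library of prior cases can be sketched with a simple similarity measure. The tiny case library, condition names, and symptoms below are invented purely for illustration and bear no relation to Ada's actual data:

```python
# Illustrative sketch only: a toy "case library" and a Jaccard-similarity matcher.
# Real symptom checkers use far richer models and curated medical knowledge bases.

CASE_LIBRARY = {
    "common cold": {"runny nose", "sneezing", "sore throat", "mild cough"},
    "influenza": {"fever", "body aches", "fatigue", "dry cough"},
    "allergic rhinitis": {"sneezing", "itchy eyes", "runny nose"},
}

def jaccard(a, b):
    """Overlap between two symptom sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

def rank_conditions(reported_symptoms):
    """Return candidate conditions sorted by similarity to the reported symptoms."""
    scores = {
        condition: jaccard(set(reported_symptoms), symptoms)
        for condition, symptoms in CASE_LIBRARY.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_conditions({"fever", "dry cough", "fatigue"}))
# -> "influenza" scores highest in this toy example
```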
Not only do these AI-based apps make healthcare more accessible to all, but they can also help address the shortage of expertise in certain areas of medicine. SkinVision, for example, is an app that can instantly assess skin issues without the patient having to see a dermatologist in person. Users simply upload an image of a potential skin problem, and the app uses AI to scan it for signs of cancer. The assessment generates a report of low, medium, or high risk, allowing users to notify a doctor immediately when a risk is detected. As more and more pictures are added to the app's online database, it will be able to recognize a wider variety of skin conditions with higher accuracy. Additionally, SkinVision encourages users to stay on top of their skin health by setting reminders to periodically retake the assessment.
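SkinVision has not published its model, but image-based risk assessments like this are typically built on convolutional neural networks. The minimal, untrained PyTorch sketch below maps a photo-sized tensor to three assumed risk classes; the architecture and labels are illustrative guesses, not SkinVision's implementation:

```python
import torch
import torch.nn as nn

RISK_CLASSES = ["low", "medium", "high"]  # assumed labels for illustration

class TinyRiskClassifier(nn.Module):
    """Minimal CNN sketch: conv features -> global pooling -> 3-way risk scores."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = TinyRiskClassifier()
photo = torch.rand(1, 3, 224, 224)          # stand-in for an uploaded skin photo
probs = torch.softmax(model(photo), dim=1)  # untrained, so the output is meaningless
print(dict(zip(RISK_CLASSES, probs[0].tolist())))
```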
Beyond the personal healthcare apps that we have now, the applications of artificial intelligence in the medical field will only continue to expand and help medical professionals treat patients more effectively. In fact, the total public and private sector investment in AI in the healthcare industry is predicted to reach $6.6 billion by 2021.
Although the future of AI in healthcare is uncertain, one thing is clear: there are many new, exciting breakthroughs that lie ahead.
Image Credit: Medium - Albert Lai
By Vaughn Luthringer
Computer vision and image recognition are pretty common terms nowadays. But their uses go far beyond Snapchat filters. Computer vision is, by definition, “how computers see and understand digital images and videos.” Yes, that can refer to how that dog filter gets put on your face. But, it can also refer to things much bigger, like, say, self-driving cars!
We’ve all heard of self-driving cars, autonomous cars, whatever you want to call them. We’ve heard a lot about the dilemmas that come with them, and the controversy surrounding the “futuristic” devices. What we’ve gotten less insight into is exactly how they work. So, let’s dive in!
Object detection is at the core of how self-driving cars function. It's broken up into two parts: object classification and object localization. In simple terms: what is the object, and exactly where is it?
Object classification is done by what is called a “convolutional neural network.” CNNs assign various levels of “importance” to objects in an input image, and are then able to differentiate objects from one another. The use of “sliding windows” allows the CNN to detect more than just singular objects that take up most of the input image. Sliding windows are “boxes” that move across an image, essentially creating smaller images for the CNN to analyze. Check out the header image on this article to see an example of sliding windows!
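A sliding window is, at its core, just a pair of nested loops that crop the image at a fixed stride so each crop can be handed to the classifier. Here is a minimal sketch in NumPy; the window size and stride are arbitrary choices for illustration:

```python
import numpy as np

def sliding_windows(image, window=64, stride=32):
    """Yield (x, y, crop) for every window position that fits inside the image."""
    height, width = image.shape[:2]
    for y in range(0, height - window + 1, stride):
        for x in range(0, width - window + 1, stride):
            yield x, y, image[y:y + window, x:x + window]

image = np.random.rand(256, 256, 3)  # stand-in for a camera frame
for x, y, crop in sliding_windows(image):
    # In a real pipeline, each crop would be fed to the CNN classifier here.
    pass

print(sum(1 for _ in sliding_windows(image)), "windows generated")
```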
What about objects bigger or smaller than our boxes? This is where YOLO ("you only look once") comes into play. YOLO is another algorithm, and it turns an image into a predictive grid, a "probability map." For each cell of the grid, YOLO predicts the probability that the cell contains part of an object and which class that object belongs to. These per-cell probabilities are then combined into larger predictions of what the objects in the image are.
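Conceptually, the image is divided into an S-by-S grid and the network emits, for every cell, a confidence that an object is present plus class probabilities. The sketch below fakes those outputs with random numbers just to show the shape of the resulting "probability map"; a real YOLO network produces this tensor in a single forward pass:

```python
import numpy as np

S, NUM_CLASSES = 7, 3  # grid size and class count are arbitrary choices here
CLASSES = ["pedestrian", "car", "traffic light"]

# Pretend network output: per cell, one objectness score + class probabilities.
objectness = np.random.rand(S, S)
class_probs = np.random.dirichlet(np.ones(NUM_CLASSES), size=(S, S))

# Combine them into per-cell, per-class scores (the "probability map").
scores = objectness[..., None] * class_probs

# For each cell, the most likely class and its score.
best_class = scores.argmax(axis=-1)
best_score = scores.max(axis=-1)
print(CLASSES[best_class[3, 4]], round(float(best_score[3, 4]), 3))
```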
Now for object localization. Non-max suppression, another algorithm, accounts for the fact that a single object may span more than one grid cell and therefore produce several overlapping candidate boxes. Candidates with probabilities below a certain threshold are discarded, and of the boxes that overlap the same object, only the one with the greatest probability is kept.
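In practice, non-max suppression operates on the candidate bounding boxes and their confidence scores: low-confidence boxes are dropped, and of any boxes that heavily overlap (measured by intersection over union), only the highest-scoring one survives. A compact NumPy sketch, with toy boxes and thresholds chosen for illustration:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against many; boxes are (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def non_max_suppression(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping detections."""
    keep_mask = scores >= score_thresh
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]  # indices sorted by confidence, highest first
    kept = []
    while order.size:
        best = order[0]
        kept.append(best)
        overlaps = iou(boxes[best], boxes[order[1:]])
        order = order[1:][overlaps < iou_thresh]  # drop boxes overlapping the winner
    return boxes[kept], scores[kept]

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.75, 0.8])
print(non_max_suppression(boxes, scores))  # the two overlapping boxes collapse to one
```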
There’s obviously much more to learn about CNNs, YOLO, and non-max suppression. This is just a basic overview, but it does break down the way self-driving cars are able to “see” their surroundings. Using these algorithms, the cars can identify and locate pedestrians, traffic lights, other vehicles, and more.
All of this tech has to come together and function properly for an autonomous car to work correctly and safely. Object detection needs to be fast and highly accurate. In the future, speed and accuracy will hopefully improve enough for self-driving cars to get out on the road!
Sumit Saha, “A Comprehensive Guide to Convolutional Neural Networks - the ELI5 Way,” Medium (https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53)
Wired, “How Do Self-Driving Cars See? (And How Do They See Me?)” (https://www.wired.com/story/the-know-it-alls-how-do-self-driving-cars-see/)
Albert Lai, Medium (https://towardsdatascience.com/how-do-self-driving-cars-see-13054aee2503)
Image Credit: Adobe Stock
By Kathy Xing
According to Wolters Kluwer, a Dutch-American information services company, as many as a quarter of all organizations have incorporated a robot that imitates human conversation, be it a chatbot or a virtual assistant. Natural language processing (NLP) is the branch of artificial intelligence concerned with understanding and mimicking human language and conversational cadences. Today, NLP powers predictive word suggestions and voice-activated assistants such as Alexa and Siri. These same capabilities, however, have found new applications during the COVID-19 pandemic.
During this time, quick access to accurate information is crucial. NLP can help spread up-to-date information and guidelines about the virus because it can accurately translate content, especially key phrases, into the world's many languages. Platforms like Google Translate currently support translation into only 109 languages, and at varying levels of quality. However, on his blog, Daniel Whitenack, a data scientist with a PhD in mathematical and computational physics from Purdue University, describes how he and colleagues at SIL International used Multilingual Unsupervised and Supervised Embeddings (MUSE), a Python library that uses multilingual word embeddings to enable NLP across many languages, to translate the phrase “wash your hands” into 544 languages. For the specifics of this process, follow the link below to Daniel Whitenack's blog.
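As a rough illustration of the idea, MUSE publishes aligned word vectors for many languages as plain-text .vec files, and once two vocabularies share the same embedding space, a word can be “translated” by finding its nearest neighbor in the other language. The sketch below assumes such files are available locally (the file names are placeholders) and is only a word-by-word approximation of the phrase-level pipeline Whitenack describes:

```python
import numpy as np

def load_vectors(path, limit=50000):
    """Load a fastText-style .vec file: first line is 'count dim', then 'word v1 v2 ...'."""
    words, vecs = [], []
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the header line
        for i, line in enumerate(f):
            if i >= limit:
                break
            token, *values = line.rstrip().split(" ")
            words.append(token)
            vecs.append(np.array(values, dtype=np.float32))
    matrix = np.vstack(vecs)
    return words, matrix / np.linalg.norm(matrix, axis=1, keepdims=True)

# Placeholder paths standing in for MUSE's aligned per-language vector files.
en_words, en_vecs = load_vectors("wiki.multi.en.vec")
es_words, es_vecs = load_vectors("wiki.multi.es.vec")

def translate(word):
    """Nearest-neighbor lookup of an English word in the Spanish embedding space."""
    idx = en_words.index(word)
    sims = es_vecs @ en_vecs[idx]  # cosine similarity, since vectors are normalized
    return es_words[int(sims.argmax())]

print([translate(w) for w in ["wash", "your", "hands"]])
```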
Aside from translation, NLP has also impacted access to and spread of information by assisting people’s search for answers regarding COVID-19. Various interfaces to answer COVID-19-related searches have been developed, such as covidsearch by researchers from Korea University and covidex by researchers from the University of Waterloo and NYU. These interfaces answer COVID-19-related questions based on CORD-19, the COVID-19 Open Research Dataset.
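The systems behind these interfaces are considerably more sophisticated, but the core retrieval step of ranking research abstracts against a question can be illustrated with TF-IDF and cosine similarity. The handful of “abstracts” below are made up; CORD-19 itself contains far more:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for CORD-19 abstracts.
abstracts = [
    "Surface stability of the coronavirus on plastic and steel.",
    "Efficacy of cloth masks in reducing droplet transmission.",
    "Incubation period estimates for COVID-19 from travel data.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)

query = "how long does the virus survive on surfaces"
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_matrix)[0]

# Print abstracts ranked by relevance to the question.
for idx in scores.argsort()[::-1]:
    print(round(float(scores[idx]), 3), abstracts[idx])
```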
Finally, NLP has played a role in public health officials' responses to COVID-19. According to Health IT Analytics, researchers gathered 95,000 posts from a popular COVID-19 Reddit thread and used NLP to identify 50 distinct discussion topics. By tracking popular topics this way, leaders can better understand the public's health concerns and priorities and address them directly. Real-time monitoring of platforms such as Reddit can enable faster responses to the general public's COVID-19-related questions. Furthermore, similar online platforms have been a source of misinformation about COVID-19, and better monitoring means public health officials can better combat and mitigate its spread.
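The article does not say which technique the researchers used, but a common way to pull discussion topics out of a large pile of posts is latent Dirichlet allocation (LDA). A minimal scikit-learn sketch over a few invented posts:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented posts standing in for the ~95,000 Reddit comments in the study.
posts = [
    "Where can I get tested near me? The testing site was closed.",
    "Grocery stores are out of hand sanitizer and masks again.",
    "My employer still has not announced a work from home policy.",
    "Is it safe to visit my grandparents if I wear a mask?",
    "Unemployment claims site keeps crashing, anyone else?",
]

counts = CountVectorizer(stop_words="english").fit(posts)
doc_term = counts.transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

# Show the top words for each discovered topic.
vocab = counts.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_idx}:", ", ".join(top_words))
```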
Overall, NLP has been applied to the spread of information to help combat COVID-19. It has helped provide people around the world with accurate translations, answered commonly searched questions about the virus, and surfaced what the public is discussing about it. How quickly NLP has been put to use only underscores its importance and the growing role it plays in modern society.
Daniel Whitenack’s blog, datadan.io
Health IT Analytics