April 9, 2024

VR Visual Field testing with Abdullah Sarhan, ep 39


One company's road to AR & VR vision screening, and the future roadmap


In this episode, I talk with Abdullah “Abed” Sarhan, CEO and Co-founder of RetinaLogik, about the company's use of AR and VR technology for vision screening. Abed shares his background in deep learning and healthcare, the development of the virtual reality prototype, the device's availability and Health Canada approval, and its differentiation from other companies' technologies. He also discusses the various test sizes and algorithms available, ongoing research, user experience, and the company's future roadmap.

Visit https://TalkingAboutGlaucoma.com to find all my social feeds, subscribe to the newsletter, provide feedback and Register as a Guest if you want to be on a future episode.

About Abdullah Sarhan and selected references can be found at: https://www.talkingaboutglaucoma.com/guests/abdullah-sarhan/

Transcript

VR Visual Field testing with Abdullah Sarhan, ep 39

[00:00:00] You're listening to Talking About Glaucoma, bringing the latest advances in glaucoma to eye care providers and patients since 2009. Visit TalkingAboutGlaucoma.com for more details about each episode. I'm Rob Schertzer, a Vancouver-based glaucoma specialist and educator, and we are talking about glaucoma.

[00:00:23] Introduction and Background of the Guest

Abed Sarhan, CEO and Co-founder of RetinaLogik. Welcome to the show.

Hello. Hello, welcome and thank you for having me today.

Oh, my pleasure. The company that you founded is leveraging the power of AR and VR to enhance access to vision screening for everyone. I'd love to talk about the current product that's out there now, how you got there, and where you're going in the future. So take it away.

Yeah.

[00:00:54] The Journey to Retinal Logic

Yeah, I would love to share my background and how I got here. Maybe I'll [00:01:00] start first with where I started and how I ended up doing what we're doing, then give a description of the product itself, the roadmap, and what we're trying to solve, I would say. My name is Abed; that's my short name. I did my PhD at the University of Calgary. My PhD was in the field of deep learning applied to vision, mostly glaucoma detection. So I was mostly using deep learning models to analyze retinal images for glaucoma, such as segmenting the vessels, the cup and disc, and other features, to be able to get objective measures for trying to predict glaucoma.

When you say segmented vessels, what do you mean?

So basically, when we have a fundus image, there are vessels inside that fundus image. I built a deep learning model to segment them in a very precise way, down to the arterioles, the thin vessels in those images, I would say.

Okay.

So [00:02:00] basically, my idea is: there are so many deep learning models out there where you input an image and output a result, like a classification of whether it's glaucoma or not, but those are usually considered black boxes and don't really build a lot of trust with specialists like yourself.

[00:02:20] Deep Dive into the Technology

So my approach was different. Instead of taking that image and outputting glaucoma or not, I said, let me input an image and extract the features: extract the cup, the disc, the vessels, try to look at what an ophthalmologist would be looking at in that image, and extract those features in addition to new ones,

Oh, got it.

and show that.

Saying, this is what glaucoma looks like.

Yeah, and try not to be a black box, because this builds trust and confidence in the results. And there is a huge trend and a huge movement now toward these types of approaches, so that you can build that trust with those models, I would say, rather [00:03:00] than black boxes. So that's basically what I did my PhD on. I worked closely with Dr. Crichton in Calgary; we worked on that project during my degree. Before that, I also did my master's at the University of Calgary, in the field of natural language processing and machine learning.
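To make the contrast between a black-box classifier and the feature-based approach described above concrete, here is a purely hypothetical sketch (not RetinaLogik's actual code; the feature names, thresholds, and rule are illustrative only). Instead of mapping an image directly to a diagnosis, segmentation models first produce human-interpretable measurements, and only then is a decision made from those:

```python
from dataclasses import dataclass

@dataclass
class FundusFeatures:
    """Interpretable features a segmentation pipeline might extract (illustrative)."""
    vertical_cup_disc_ratio: float  # vertical cup diameter / disc diameter
    rim_area_mm2: float             # neuroretinal rim area
    vessel_density: float           # fraction of pixels classified as vessel

def glaucoma_risk(features: FundusFeatures) -> str:
    """Toy rule-based triage on top of the extracted features. A real system
    would use a trained classifier, but the inputs stay human-interpretable,
    so a specialist can see *why* an image was flagged."""
    if features.vertical_cup_disc_ratio >= 0.7 or features.rim_area_mm2 < 0.8:
        return "suspicious"
    return "unremarkable"

print(glaucoma_risk(FundusFeatures(0.75, 1.2, 0.08)))  # → suspicious
print(glaucoma_risk(FundusFeatures(0.40, 1.5, 0.10)))  # → unremarkable
```

The design point is the one Abed makes: the intermediate features (cup, disc, vessels) are things an ophthalmologist already looks at, so the model's output can be checked against clinical judgment rather than trusted blindly.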

So my project was: you type a question, and it would analyze that question and retrieve data from a relational database. At that time, there were very few people working on that. Once I graduated, there were a couple of startups trying to work in a similar field, I would say.

Cool. So now your worlds are colliding here.

Yeah. So, like, trying to build those components together.

For our listeners, I should have mentioned that Andy Crichton, who you had worked with, is a glaucoma specialist in Calgary.

Yeah, he's a glaucoma specialist there.

[00:03:58] The Inspiration Behind the Innovation

But what [00:04:00] drove me to healthcare is basically me coming from the developing world. We had so many limitations in healthcare systems, and when I was doing my master's, I had the opportunity to work with the Cumming School of Medicine on a project in Uganda. I traveled to Uganda to do a pilot project that could help digitize data collection, so that they could decide which services a community would need and redirect resources accordingly. During that time I got to meet people and interact with locals, and I saw that the healthcare limitations I used to face growing up were also there in Uganda and in other countries as well. After that, when I came back, I told my supervisor that for my PhD I wanted to do something in healthcare that has impact, and that's when I started my journey in the ophthalmology field. I also used to teach courses at the University of [00:05:00] Calgary, supervise some students, and publish, but once I started my PhD, I used to shadow doctors, I would say, and observe the clinical flow, trying to understand it, because that's what was helping me in my research. While shadowing with doctors there, I saw this big visual field machine, and I saw how people sometimes suffer with it or have some challenges doing it. And for me, I thought maybe there is a better way to do it. I went home at that time and started thinking, okay, what can we do better here? And I thought of virtual reality at that point,

Yeah.

and I started developing a prototype at that point. And that's how it started.

[00:05:47] The Evolution of the Product

The idea of the company or the idea of the startup, I would say.

What sort of issues were you noticing with patients, aside from the usual things like their head backing away from the [00:06:00] machine?

Yeah, so there are some people having issues with their necks, I would say. It took them some time. Usually the elderly had some limitations as well.

And that's the population that has glaucoma. So they're the ones who need the visual field tests and need accurate results.

Yeah, and basically the younger population is at less risk of glaucoma, I would say.

[00:06:26] Challenges and Solutions in Vision Screening

But another factor I found when I started looking at how the exam runs, because I started observing: people need to fixate on a point, and basically if you move your eye, or even after your exam is done, there is nothing that gives you instant feedback or instant motivation, and the exam is very long as well.

There are a couple of additional factors: it needs a full room to be able to do those exams. And you can see the bottlenecks in the clinic sometimes, because from how I see the flow, the visual [00:07:00] field is sometimes the most time-consuming exam in the clinic, I would say.

I never thought of feedback while you're doing the test, so I'd like to hear more about that. When I learned how to touch type, I didn't learn it in high school; I learned it once I was in medical school, using a computer program on my Commodore 64 where letters, then strings of letters and phrases, and then full sentences would invade from the sky, and I had to shoot them down by typing them correctly.

So there I was getting great feedback, and that really worked for my brain in learning how to type.

Yeah. And that actually helped you adjust how you're doing it, get better at it, get more precise or better at handling it. And it's similar with visual fields.

The issue with fields is that, because they take lots of time, people can lose focus, and people sometimes don't really know if they are doing it right or wrong. I know in some countries [00:08:00] they dedicate around 45 minutes just for the visual field. They treat it as a very serious exam, but it takes so much time.

And also, it takes doing the visual field four to six times for patients to overcome the learning curve and start being more accurate at doing it.

Yeah. And sometimes you need, as you mentioned, a couple of fields as well. It may take four or five months, and all of this is time-consuming; these are limitations, bottlenecks in the clinic as well. So the specialist needs to mitigate that to be able to provide a proper diagnosis. And it is one of the important exams because it tells you functionally whether you're actually seeing or not. And that, combined with retinal images, provides very valuable information that is sometimes very hard to obtain otherwise.

It's really neat.

Yeah, so I came back and started doing some research: how the visual field works, how the [00:09:00] machines are priced, all of this stuff. And I started to think, okay, is there a way to do it better? I started with a prototype at that point, and that was late 2019. I do remember at that time I showed it to two ophthalmologists and two optometrists, and they all said, yeah, there is potential for this.

There is a need for it, but the key is reliability. You need to make sure that the results are reliable and precise, I would say. So I took it from there. I was finishing my PhD, so things went a little slowly while I finished my degree. Once I finished my PhD, I switched to working on this full-time, and I've been getting very good feedback; it's been well received by the community, especially ophthalmologists, I would say, and optometrists as well, who've been supportive and have been trying out this technology.

[00:09:57] The Impact of RetinaLogik

They're trying to see the potential of it, [00:10:00] especially when you can have this technology show impact and provide it for people who can't really afford those big machines or can't get those machines in their own communities. So we started with that.

Since then, we've grown across Canada. At this point, we have a couple of publications coming out. We've built collaborations with a couple of ophthalmologists, and optometrists as well, who've been excited about it and who use it in their community work, in their mobile clinics, and in their own clinics. And the potential of it is not only the visual field. The visual field is just one exam, but there are so many exams you could do in virtual reality, and you could combine that with other data points. I do remember I started with retinal images.

But visual fields, combined with retinal images in the clinic, can be a great asset to show you progression and detection of conditions that might otherwise be [00:11:00] a little bit challenging, I would say.

For example, something that isn't glaucoma, where it corresponds with an area in the retina where there's a lesion that you might have otherwise missed.

Yeah, a hundred percent. And that's basically where I feel the direction is. The visual field is just one exam, but the vision is beyond the visual field. The vision is a platform where people can access proper vision screening, a quality one that provides reliable results and allows early detection of conditions.

Because once you detect it, then there are medications you can apply to slow progression or solve that challenge. But with something like glaucoma, once it is very advanced, you can't really retrieve the vision that has been lost, at least for now. We'll see what research says in the future about that.

Right.

You mentioned the cost savings in terms of getting this screening into locations where we [00:12:00] might otherwise not be able to do the testing. Current machines are about $40,000 or $50,000, plus ongoing maintenance, and on top of that, for a lot of those, there's additional viewing software. If you want to connect them to your computer, you have to pay another $20,000 or $30,000 for that software. So what are we looking at for the current device that you have available?

Yeah, a hundred percent. Something to add on top of that: in addition to the cost you mentioned, there is the table, the chair, and also the room.

And the need for a technician to be there all the time.

Yes. And that's also a cost that clinics sometimes have to face, especially very busy clinics, I would say.

Yeah.

So combine those together. With the first version of the headset that we're using, you basically eliminate the majority of those. A technician doesn't need to be beside the patient; they can monitor the [00:13:00] exam remotely, or from a different office. There is support for multiple languages. Patients get their own feedback. The field itself adjusts to how the patient is doing, so that it can help them and guide them throughout the exam through interactive simulation, I would say. Currently, we have two models: one is subscription and one is upfront. The subscription is $300 per month, and the upfront is $10,000. When we were doing customer discovery, the majority of clinics were saying, oh, we want upfront. But once we started selling, I would say the majority went to subscription, because with subscription you basically get access to the full platform, full upgrades, and warranty all the time; you don't really [00:14:00] need to worry about anything on the platform. The thing with virtual reality headsets is that those devices last four to five years max, and there's so much advancement happening in that field. So three or four years from now, you just get another device: more advanced, more precise, with different or new exams using that technology. And that's why some people prefer subscription, because you can update and upgrade at no cost.

Yeah. Subscription: a lot of software has become subscription.

Yeah.

And a main reason for that is to have ongoing support from the developer; that way they have a steady flow of income coming in to keep them developing. Otherwise they'd be out of business, and you wouldn't want that: people pay $10,000 and the company's gone. So subscription is a much healthier model for the business as well.

A hundred percent. It depends on [00:15:00] how their finances sometimes work, but when you're paying $10,000 for a VR headset, you're paying $10,000 for a device that would last around five years maximum, and after that you won't get any upgrades; you need to buy a new device if you decide to continue with it. With subscription, you basically don't need to worry about that, because when there is a new update, you just get it for free. And when we are talking about progression and detection and integration with other data points, all of it is in a centralized online platform, so you don't face the same challenges as with the current visual field machines, for instance, where you need another license to connect them to a portal or an online platform. With virtual reality, you don't need to do that; it's all connected to one centralized platform that you can use.
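For context on the two pricing models discussed above, a quick back-of-the-envelope comparison using only the figures mentioned in the episode ($300/month subscription, $10,000 upfront, and a roughly five-year headset lifetime; the break-even framing is the editor's, not the company's):

```python
# Figures quoted in the episode
SUBSCRIPTION_PER_MONTH = 300       # dollars per month
UPFRONT = 10_000                   # one-time purchase, dollars
LIFETIME_MONTHS = 5 * 12           # headset lifespan estimate: ~5 years

# Months of subscription that add up to the upfront price
break_even_months = UPFRONT / SUBSCRIPTION_PER_MONTH

# Total subscription cost over one full headset lifetime
lifetime_subscription_cost = SUBSCRIPTION_PER_MONTH * LIFETIME_MONTHS

print(round(break_even_months, 1))    # → 33.3
print(lifetime_subscription_cost)     # → 18000
```

So the subscription costs more in raw dollars over a full five-year lifetime ($18,000 vs. $10,000), but, as Abed argues, it bundles upgrades, warranty, and a replacement device at end of life, which the upfront price does not.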

You also mentioned the different languages. From what I understand, [00:16:00] with your RetinaLogik headset, the person who's undergoing the test gets instructions from the headset itself, right?

Yeah, so is this optional?

I would say so. Patients will have an interactive tutorial. Someone will be guiding them throughout the exam, all of it through the VR headset, and it'll speak in their own language. And the way we created those languages: we don't use Google Translate; we use local people who can create the same experience, so that it explains for them what they need to do in their own language. And the tutorial is interactive: we monitor how the patient is doing, and that informs how well they will do in the exam itself.

Yeah, that's an incredible advancement over what we have now, where I have so many patients who are from all over. [00:17:00] The majority of my patients don't actually speak English as their first language, so this would be a huge help for my patients and for many others.

Yeah, what would you say is the main language the majority of your patients speak?

Mandarin, Cantonese, Punjabi. Yeah.

And those are basically languages we have on our platform. And yeah, even during the exam, if the patient loses focus, or if they have high false positives, I would say, it also warns them: oh, stay focused; you can blink, you just need to stay focused. And we try to minimize the duration by speeding up the exam while at the same time not affecting the reliability of the results.

In terms of availability and Health Canada approval, I believe you've told me in the past this is class one approved and you're applying for class two. Could you explain what that means?

Yeah. So [00:18:00] with Health Canada, it is classified as class one because we're doing screening. When you're doing screening, class one is sufficient, and that's what clinics can use at this point, in Canada and in the US; it's all class one there, I would say. However, if you want to claim monitoring, which is the progression analysis, that usually puts companies in class two. To get class two, from our side, we don't really need to change anything in the algorithms or the application itself. It's more about the quality management system, obtaining specific certification, which we're already applying for now, and we are actually in the process of getting class two in the next few months, I would say.

That's amazing. Now, I've seen at conferences, actually not too much at conferences yet, but at the Canadian Ophthalmological Society meeting coming up in [00:19:00] May 2024, where I'm involved in the program planning, there have been a lot of submissions for the use of VR in ophthalmology.

How does your technology differ from that of other companies out there?

A hundred percent. Yeah, there is a rise in using virtual reality because of its portability and ease of use. What differentiates us is the analytics aspect and how we conduct the exam through our virtual reality approach. We use light-based technology: a light-based stimulus actually mimics the same behavior as the current visual fields, which makes the results comparable. A previous version we had used contrast-based stimulus sizes, and contrast-based stimulus sizes don't really filter the noise between the foreground and the background of the screen, which makes it a little bit muddled, a little bit not very clear, I would say. [00:20:00] So we take that into account by making it light-based, and we also take into account the distance at which we're doing the fields. Although the standard machine is a bowl and VR is known for a fixed distance, we basically try to keep that measurement while still using light in the field. So that's one aspect; user experience as well.

We've been receiving very positive feedback from clinics on the user experience and the reaction of patients as well, which makes it streamlined and very smooth for patients; and the analytics aspect as well. Given our background and the research we've been working on for quite some time, that's what also gives us differentiation, I would say, from what's out there. I wouldn't be surprised if a few more come up in the next couple of years. I think the question is the quality of the research being conducted, which will be a big deal in being able to use these in the clinic, and the user experience aspect.

[00:20:57] Future Plans and Closing Remarks

And although the visual [00:21:00] field is one exam, having a vision beyond the visual field is something also useful and helpful to be doing in clinics, I would say.

Okay. And the test object size: do you have size three and size five? Is that what you offer?

Yeah. So we have size three, size four, and size five as well; those are what we found to be the most common ones. We also have different algorithms and different grids, like 24-2, 10-2, even a full-field 120 as well. We have that integrated in the platform. Esterman as well, integrated in the platform.

There are a couple of other exams; we also have color vision and contrast as well. Some of those we are still working on, because we want to publish research on some of those exams. So we're doing some studies and testing with some people as well, and we have some articles published there.

Cool. Yeah, I'd love to see the math of how you figured out the actual test size, [00:22:00] because these objects are now right in front of the eye instead of 40 centimeters away in the perimeter. You had to translate those sizes into what they would be when they're at your face, right?

We had to do lots of projections. We had to know exactly how the Unity conversion system works. We did a couple of rounds of testing; early last year, I would say, we had a version which used contrast, which used a big stimulus very close to the eye, and we found that it doesn't really work out for many people because of how VR actually works. So we changed it and tested it on patients, mainly glaucoma suspects, which is basically the population for whom visual fields are very important and useful, I would say.
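A geometric sketch of the kind of conversion the two are discussing, assuming the commonly quoted ~30 cm perimeter bowl distance and the published Goldmann stimulus areas (sizes I-V quadruple in area each step). This is the editor's illustration of the math, not RetinaLogik's actual implementation: the stimulus on the bowl is characterized by the visual angle it subtends, and a VR renderer must draw whatever size on its virtual surface reproduces that same angle.

```python
import math

# Goldmann stimulus areas in mm^2 (each size quadruples the previous area)
GOLDMANN_AREAS_MM2 = {"I": 0.25, "II": 1.0, "III": 4.0, "IV": 16.0, "V": 64.0}

BOWL_DISTANCE_MM = 300.0  # standard perimeter bowl distance, ~30 cm

def angular_diameter_deg(size: str, distance_mm: float = BOWL_DISTANCE_MM) -> float:
    """Visual angle subtended by a circular Goldmann stimulus at a given distance."""
    area = GOLDMANN_AREAS_MM2[size]
    diameter_mm = 2.0 * math.sqrt(area / math.pi)
    return math.degrees(2.0 * math.atan(diameter_mm / (2.0 * distance_mm)))

def vr_diameter_mm(size: str, virtual_distance_mm: float) -> float:
    """Diameter a stimulus must have on a virtual surface at virtual_distance_mm
    to subtend the same visual angle as it does on the perimeter bowl."""
    theta = math.radians(angular_diameter_deg(size))
    return 2.0 * virtual_distance_mm * math.tan(theta / 2.0)

# Goldmann III subtends roughly 0.43 degrees; size V roughly 1.72 degrees
print(round(angular_diameter_deg("III"), 2))  # → 0.43
print(round(angular_diameter_deg("V"), 2))    # → 1.72
```

For example, rendering a size III stimulus on a virtual plane placed 1.5 m away would require `vr_diameter_mm("III", 1500.0)`, about 11.3 mm, to preserve the 0.43° angle; an engine like Unity does this in world-space units rather than millimeters, but the trigonometry is the same.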

I've got another question for you. I'm not sure if your device is doing this or if you've even considered it. I talked about this probably back when [00:23:00] I did an episode with Chris Johnson, oh, 10 or 12 years ago, on visual fields. It's the phenomenon where, as visual fields get worse in patients with more progressive disease, there's more fluctuation in the test because of the flattening of the frequency-of-seeing curve, and the only way around that, at least at the time, was to use a bigger test object. That makes the frequency-of-seeing curve more vertical, so it gets rid of that bigger fluctuation. Is that something you've considered, or something potentially in the future?

We are working on something close to that, I would say, that actually takes into consideration the stimulus fixation point as well, including the stimulus size as people get older. We do have some data that could help us in making the proper projections when it comes to virtual reality. And there are a couple of things [00:24:00] coming out in the research; we have some research going on behind the scenes that should be released around next year when it comes to the visual fields themselves, I would say.
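The frequency-of-seeing effect Rob describes can be sketched with a logistic psychometric function (an illustrative model chosen by the editor, not either device's actual algorithm): a flatter curve spreads uncertain responses over a wider range of stimulus intensities, so repeated threshold estimates fluctuate more, while a larger stimulus steepens the curve and shrinks that uncertain range.

```python
import math

def p_seen(intensity_db: float, threshold_db: float, slope: float) -> float:
    """Logistic frequency-of-seeing curve: probability the patient reports a
    stimulus of the given intensity, for a location with the given threshold.
    `slope` controls how steep the transition from unseen to seen is."""
    return 1.0 / (1.0 + math.exp(-slope * (intensity_db - threshold_db)))

def uncertain_range_db(slope: float, lo: float = 0.25, hi: float = 0.75) -> float:
    """Width of the intensity range where responses are unreliable (25%-75%
    seen). A wider range means more test-retest fluctuation at that point."""
    inv = lambda p: math.log(p / (1.0 - p)) / slope  # invert the logistic
    return inv(hi) - inv(lo)

# Steep curve (healthy location, or larger stimulus) vs. flat curve
# (damaged location, small stimulus): the flat curve's uncertain zone is
# several times wider, which is the extra fluctuation Rob describes.
print(round(uncertain_range_db(slope=2.0), 2))  # → 1.1
print(round(uncertain_range_db(slope=0.3), 2))  # → 7.32
```

The clinical workaround Rob mentions (a bigger test object, e.g. moving from size III to size V) corresponds here to increasing `slope`, which narrows the 25-75% zone and stabilizes the measured threshold.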

Cool. Any other things on the roadmap to throw in there?

I think 2023 has been great. We achieved a lot as a company, especially being here in Calgary; we received so much support from the community. People have been excited about it and supportive of the technology. 2024 will be very exciting. There are so many new exams coming out, so many publications coming out, I would say. And I'm stressing publications because those basically show the reliability and show how such exams are being validated and utilized, I would say. So in 2024, expect multiple exams and multiple new features coming out, and many exciting things going on for RetinaLogik at this point.

Cool. [00:25:00] Abed Sarhan, CEO and Co-founder of RetinaLogik. Thank you so much for being on the show.

Amazing. Thank you for having me. And yeah, have a great day as well.

Yep, you too.

That's our show for today. Thanks for listening. Visit TalkingAboutGlaucoma.com for more details about each episode and how to get more involved with the show, including receiving future newsletters or becoming a guest or sponsor. Please rate this show on your podcast player of choice and tell your friends about it. Keep informed to prevent needless loss of vision from glaucoma. See you next time on Talking About Glaucoma.


Abdullah Sarhan

CEO and co-founder RetinaLogik

Abdullah Sarhan, PhD, is currently the CEO and Co-founder of RetinaLogik, a Canadian-based startup leveraging the power of AI and VR to enhance access to vision screening for everyone everywhere.

Dr. Sarhan holds a PhD in Computer Science from the University of Calgary, specializing in vision science, particularly in glaucoma and machine learning. His academic journey also includes earning an MSc from the same university, focusing on natural language processing and machine learning. Dr. Sarhan has more than a decade of experience in data science and software. He has contributed to various peer-reviewed journals and conferences in high-ranking venues. Additionally, Dr. Sarhan has delivered numerous talks, workshops, and courses locally and internationally related to data science and healthcare. He has received multiple awards and grants to support his research, including the Killam Award. Dr. Sarhan has also won several awards for teaching at the University of Calgary.

Driven by a passion for translational research, particularly in utilizing technology to improve healthcare quality and access, Dr. Sarhan brings a unique perspective to the intersection of technology and healthcare, especially in vision. Beyond his professional pursuits, he is a self-declared "chef" and enjoys spending his spare time listening to podcasts.

Selected publications:
https://scholar.google.com/citations?user=f8cF3aYAAAAJ