#088 Clarity CEO & Co-Founder Michael Matias: It’s All About Being on the Court

Show Notes

Modern technology can give us so many great things, but it also creates new challenges. Over the last year, deep fakes have emerged as one of those great challenges. Deep fakes use generative AI to deceive and manipulate people with things that aren’t true. Luckily, a startup like Clarity is working to address this problem. Clarity serves prominent news outlets, law firms, and other organizations, helping them verify and authenticate audio, video, and images so that people know what is fake and what is real.

Clarity CEO and co-founder Michael Matias joined startup coach Roland Siebelink on the latest episode of the Midstage Startup Momentum Podcast to discuss the problem of deep fakes and how Clarity is solving it. They also discussed a variety of topics relevant to Clarity’s journey as a tech startup.

  • How Clarity approached its move from a security solution to a product.

  • Approaching the challenge of having so many potential use cases.

  • The importance of action-based learning and being “on the court.”

  • How startup leaders cope early on when their product is faltering.

  • Behavior patterns that are found in successful startup leaders.

Transcript

Roland Siebelink: Hello and welcome to the Midstage Startup Momentum Podcast. My name is Roland Siebelink and I'm a coach and ally to many of the fastest-growing startups around the world. And that "around the world" we take quite literally today, because joining us all the way from Tel Aviv, Israel, is Michael Matias, the founder and CEO of Clarity. Hello, Michael.

Michael Matias: Hi. Thank you for having me here. 

Roland Siebelink: Of course. It's an honor. I've heard a lot about Clarity, and you just raised an amazing seed round with Clarity, so we just had to have you on the podcast. But for those that haven't heard of you yet, what do you do, whom do you serve, and what difference are you making in the world?

Michael Matias: We're Clarity. We are helping protect organizations, enterprises, and others from the threat of deep fakes. Deep fakes are a big, newly emerging threat where anybody can use artificial intelligence - what we now call generative AI - to impersonate a person's voice, face, and likeness, and make them do and say things they never did and never said.

This is becoming a rapidly emerging threat at the security level, at the trust level, and for the media itself, with many different growing use cases. We get to work with some really important organizations to help them protect themselves from deep fakes, but also to help protect the integrity of the information they put out to the world, to make sure that it is verified and authentic.

Roland Siebelink: Okay. What does that mean, really important organizations? Are you working with media organizations? With political organizations? What exactly is your typical customer base? 

Michael Matias: We're working with some really large news outlets, definitely brands that you know and are familiar with. And what we do with them is we help them verify the news as it comes out.

Consider all the different videos and audio clips that are being used, either in the research side of news gathering or when actually presenting the news to the audience. We work with the editorial teams to help them verify their content. But we also serve as security software for different enterprises who are using media in their day to day.

For example, video communication - exactly what we're doing right now. This is a new way for attackers to impersonate employees and executives in real-time voice and video. We serve as a security layer for this real-time interaction to help preserve the integrity of the communication.

And lastly, we work with very big law firms, helping them verify and authenticate the legal evidence that they take to court. Consider all of the digital evidence - video and audio - that is being presented in court cases, which are growing in number. I believe about 70% or 80% of court cases rely heavily on digital media today. Those are really big problems to tackle.

Roland Siebelink: Excellent. Okay. How did you even get to this idea? It sounds like you guys were working on this already before the boom of generative AI started catching the public eye. What was the origin story? 

Michael Matias: I think it's always an interesting question where entrepreneurs get their calling to do what they do in their context. For me, I've been in cybersecurity for many years. I spent about five years in the IDF as a cybersecurity officer. I got to lead AI teams there. I was always very passionate and excited about this intersection of AI and cybersecurity.

And then when I went to Stanford, I dove deeper into artificial intelligence and generative AI, a little bit before it was called generative AI. But I was always very much thinking through what problems this could present, or what opportunities it could now present, to the world. I got to actually dive into deep fakes quite a bit myself in that context - also a little bit in the context of political science, democracy, and elections. And it became very clear very quickly that this is going to be one of the biggest challenges that we will need to adapt to. I have no doubt that we have a bright future of helping tackle misinformation and AI-based cyber attacks as an ecosystem.

But for me, it was a very personal experience of seeing the technology firsthand, seeing what it can do, and then connecting the dots of where we're heading and saying, "Wow, in a few years, we're going to be in big trouble if we don't have really robust systems." And that's what we set out to build.

Roland Siebelink: How urgent is this deepfake problem? Because when you read mainstream media such as The Economist, it's always framed as something in the future, something that might happen - but then again, something still relatively easy to spot socially.

Can you give us a concrete example of where a deepfake has done real damage - if you can come up with one - and what society, or you guys, did about it?

Michael Matias: Absolutely. First of all, I think that you're right in the sense that the perception is that deep fakes, in general, are a futuristic concept. And that was true, I think, all the way up until six months ago. When we started the company, we also looked at deep fakes as something that was going to come; it was just a question of when and where. A lot of people thought it was going to take years for them to appear. We were more bullish: they were going to appear in a matter of weeks and months. And I think that's really what happened.

If you actually look at the numbers, in 2023 there were 10 times more deep fakes than in 2022. This shouldn't surprise us too much, because if you look at AI-generated images, to our knowledge about 15 billion of them were created in 2023. That's more than the number of photographs taken in the first 150 years of photography. That just shows a little bit about the growth.

It's all about the rate of change. Deep fakes are growing exponentially, and we already have concrete cases where they're impacting public opinion to a very large degree. For example, in January, right before the primaries, you had Joe Biden calling 40,000 people in New Hampshire to tell them not to vote. That was a robocall with Joe Biden's voice. It was not him. It was a deep fake of him.

The role that we played: working with some of the big news outlets, we helped quickly verify that this was indeed a deep fake. Bloomberg quickly reported that we were part of the analysis process, and I think that made a pretty significant impact.

Roland Siebelink: One more question about the product before we move on to the go-to-market. I do want to ask, since you're based in Tel Aviv at the moment - offices in Tel Aviv and New York, as I understand - you are close to or in a war zone yourself at this point in time. Is one political implication of the war that you see a lot more pressure for deepfakes to come up, and for people to refuse to believe them, given the polarization of society around these issues?

Michael Matias: Yeah, we saw something really interesting in October when the war between Israel and Hamas broke out. All of a sudden, this notion of "Can I trust these images or these videos that are circulating online from the war zones?" was brought to public attention. And we were confronted with those questions very, very quickly, a matter of hours into the war. In fact, I was in Tel Aviv that day. I remember at 7:00 AM - the big intrusion of Hamas into Israel happened at 6:30 AM - I was getting videos on my WhatsApp from friends and family asking me, "Hey, can you check if this is real? We don't believe it."

And I remember I showed that to my girlfriend. I told her, “There's no way this is real. This is either a deep fake or this is something taken from a different war.” 

Unfortunately, it turned out to be very real. And what happened over the next few weeks and months is that you saw effectively the entire world engulfed in these questions: can I trust this video of rockets hitting a hospital? Can I trust the videos of rockets going from Gaza to Israel?

What we felt is that this was really a climactic moment for the world and a turning point for the larger society, asking this fundamental question of, "Can I trust what I'm seeing? Can I not?" And for us, it was obviously a big catalyst for our work, whether with governments or with news agencies, et cetera. And since then, the use cases just kept on coming.

Roland Siebelink: Okay. I did want to move a little bit from the product offering itself, which I think we discussed quite well, to the go-to-market. As with many products that come from a more technological background, it's often a challenge to turn that into a revenue model.

How has Clarity dealt with that challenge? How many iterations did you have to go through? How did you actually land on something you can sell to investors? As we all know, ultimately they're more interested in the business and revenue streams than in just the product.

Michael Matias: Yeah. Obviously, there are different types of startups and there are different types of products. Some of them are more inclined to be deep tech than others. And some of them are more reliant on strategic business models.

We are operating in the deep tech space. The barrier to entry for anybody to try and do what we do is very, very high. In fact, we employ more than a dozen phenomenal engineers and researchers who are best in class at what they do. And we're very, very serious about the research and the engineering work that it takes to detect deep fakes.

And it's no surprise that you see pretty much the entire world talking about how difficult this problem is to solve. Yet it must be solved and it must be tackled. We're very proud of the work we're doing.

When it comes to the product and go-to-market, it really does change between industries. We realized that deep fakes are a horizontal problem for many, many different industries. And then it becomes a question of where your focus is, and how you productize your security solution, whether for news outlets, for lawyers, for enterprises, or for fintechs specifically. And then each product has its own world. It has either a subscription business model or a usage-based model, like OpenAI's tokens. And there are different prices based on the comprehensiveness of the analysis you need to do, because it also turns out that different customers and different use cases require different levels of comprehensiveness and different timeliness of the output you provide.

And deep fake detection is just a part of that, because it's not just the algorithmic solution; it's actually the whole workflow of how you integrate into their ecosystem and empower them to continue relying on the same information they've relied on until now. That requires a lot of product sophistication, and there are a lot of nuances to that. And once you get in, it becomes pretty clear how much it's worth to the organization. But that also changes over time. As deep fakes become more and more apparent and more and more threatening, a solution like this is going to become more critical and will also command higher pricing. There's going to be more scarcity, et cetera.

Part of that is due to the fact that the technology has to improve. The algorithms have to get better. There's going to be more effort put into them. They're going to take up more computing power. There's going to be a need for more researchers to get the work done. It's a really dynamic field. I think our job as a startup is to be there wherever and whenever we're needed and to ride that wave, to grow with the market. And I think that's the big promise that we bring to the table.

Roland Siebelink: Okay. That sounds good.

Michael, many deep tech companies in particular have that challenge of all these go-to-market possibilities, all those use cases, all those customer segments. It can feel quite daunting to try and tackle them all. What I've seen is two different approaches. One is: let's just focus on one, maybe two for now, and all the rest will come somewhere in the future.

The other one is: let's try to be more upstream and work with partners who each customize our generic product for their specific vertical. Have you given much thought to those different strategies, or have you come up with a different one? How do you tackle this problem of having all these use cases and how to cover them?

Michael Matias: We're seeing success in looking at both: appreciating the fact that we do have substantial generic technology, which is suitable for many, and being able to prioritize that for channel partners who can then customize it for their needs. Obviously, over the last decade or so, the world has become accustomed to new software solutions coming to the market that provide some proprietary insights and capabilities and can then be customized.

And then when you look at the actual verticals, you certainly have to focus. I mentioned those three larger verticals that we're operating in; I could name 12 more that are banging on the door. Focus is always a question of how you take the big funnel that you have and distill it down to where you actually want to be, given your capabilities.

We're very focused on and serious about measuring the opportunity level today, in April 2024, in each of these verticals, keeping an open mind that these opportunities shift over time and that different markets ripen at different times based on market conditions, based on everything that's going on. Obviously, regulation plays a very large role in our space.

We are very focused on the three verticals I mentioned. We're also looking at how to bring our generic solution to the market, and we have some really great partners there. We're very proud now to be part of the Intel ecosystem and the Deloitte ecosystem and to bring our solution to market in those contexts. But we're still at a very early stage. It's the early days. We'll continue focusing as we go along.

Roland Siebelink: You said that one of the drivers of the big momentum that led to your seed round was some of the deep fakes that started to circulate in public and be covered in mainstream media. What have been some of the other factors you've been gaining momentum on - actions you took inside the startup, perhaps? How did you drive that momentum: faster growth, more traction with investors? And what can other startup founders learn from that?

Michael Matias: I firmly believe in action-based learning. It's all about being on the court. The more time you spend on the court, the more you try to productize, commercialize, meet the market, meet the customers, and see how people actually use your product and give you feedback, whether that's internally at first or externally.

It's about creating an ecosystem around this vision and this idea, putting it out there as much as possible, and testing it out. I think it comes as no surprise that initially most of the feedback is: this sucks; this doesn't work; this is not what I need; this is horrible. And over time you ask, "What would you need to see differently in order for this to be impactful for you?" And then you start learning what the market really wants. I think a lot of people spend a lot of time romanticizing what they think the world needs and what their customers need. And then they spend a bunch of time and money building it out, only to discover, "This is actually my own view of the world. This isn't what the world actually needs."

We, from the very early days, were very outward-facing and very much focused on this: we understand where we're at with deep fakes, and we have our own knowledge that we're building, but beyond this first layer of information that we've gathered and analyzed, the rest needs to come from the market. I think the fact that we took that very proactive approach of learning from the real world is part of what got us this momentum. It got us introduced to the most amazing people. It got those people to see that we are fast learners, that we prioritize execution, speed, and velocity. And I think that's what ultimately makes for a successful product, because the first dozens of iterations will probably not work; very rarely do they. It's all about whether you can get to the iteration that is good enough to actually get you market momentum before your money runs out. And that's the game that entrepreneurs are playing.

Roland Siebelink: Yeah, love it. I have two questions about that. The first is, as you say, you come out and the first 10 times or so, people will say your product absolutely sucks. Psychologically, how do you deal with that? How do you keep going? 

Michael Matias: I think that it's all about setting expectations, right? If I were to work for 10 months on a product, then bring it out and hear from a customer that it's horrible, I would be devastated, because I would look and say, "I just spent 10 months on this and it's not what I…" But if I spend two days building a prototype and then come and sit with the customer and they say, "Oh, this is not good, but here are three things you can do to make it better for me," I can come back to them two days later. I now experience my work as an iterative process rather than a romanticized building of something that's picture perfect. I treat my work as an experiment.

Roland Siebelink: The second question I had, based on what you were saying before: being on the court, taking feedback to heart, rapidly iterating - that is much easier to do when everything is still founder-led than when the company is starting to grow. Any thoughts on how, as you start adding more people, you maintain that culture of being so close to the market, of not getting too invested in solutions before you've gotten feedback, and all the other things you were saying? How do you spread that culture? How do you keep people on board with that as you grow?

Michael Matias: I think that for me it's about creating a culture where everybody embodies that mindset of being on the court. If I manage to create a group and a culture where the people who surround me are also prioritizing being on the court, then the conversations we're having and our collective experience embody more of what happens on the court than what happens in our own minds. And then it's an exponential booster to my own ability to be on the court. It's almost like an extension of me being there.

I wish for us that even when we're 100 people, we'll still prioritize being on the court, and I'll be able to be on the court, by a kind of transitive property, through employee number 100. I'm really looking at how we make sure that we prioritize our experience of what's happening in the real world over what's happening in our own minds.

Roland Siebelink: I just wanted to ask you as well, Michael - since you've also been investing in quite a few companies for a while now - to the degree that you've interacted with all those different founders, what are some patterns you've seen where founders' behaviors, personalities, and ways of operating actually drove the startup to success, or - without mentioning any names - where they didn't, and maybe failed because of the leadership abilities of those founders, or the lack thereof?

Michael Matias: I think probably the number one trait I've observed that has been most impactful, or most consistently a pattern for success, is being on the court. For me, being on the court has two dimensions. First, the obsession with getting out there and seeking reality. But also how fast you can do it, how fast you can iterate on that court.

It can be: how many customers do I speak with? How many engage? How many demo calls do I take? How many people do I engage with to hear their opinions? How many researchers do I talk to about my vision for how to solve this problem? It's that obsession with being active outward and sharing that obsession with employees. Because I think that then translates to every part of the company, starting with the research - you can do research by saying, "I'm going to go and build these deep neural networks myself because I have this idea for how it should work, and I'll take what I've learned until now and implement it."

But there's an entire world happening out there, a world that keeps shifting and where new ideas are coming up - I almost see the world as this big amusement park filled with information and data, which is readily available. And those, I think, who are not actively trying to capture that are missing out on some really big opportunities.

And I think when I look at the companies I've invested in which were successfully acquired, or IPO'd, or did really great work, it's really the entrepreneurs and founders who prioritized this iterative process of trying and failing. If an experiment felt like it would take too long, they often de-prioritized it and prioritized the experiments that were much shorter and taught them the most.

Roland Siebelink: Just for the benefit of our audience, can you contrast that with the kind of CEO who does something different? What other types have you seen in companies you never invested in to start with? What's the anti-pattern, the kind of CEO or founder that you would avoid?

Michael Matias: I think an anti-pattern to this is somebody who has an idea - and by the way, that idea can be brilliant; that CEO may have the right idea, the right problem, and the right solution - and then they go into work mode, where they work internally with their team to build the product, have internal conversations on what it should look like, and keep a lot of things in-house for a long time. And then, when they meet the market after several months, they realize that they were a little bit off track. At this stage of a company, every degree of being off track is a big deal.

If you go in a certain direction and spend five months building it out, then even a small deviation leaves you in a very different location. If you work in iterative processes of two to three days, then even if you're off by several degrees, the amount of drift is hopefully small enough that you can shift, pivot, and dance your way through.

I do see a lot of entrepreneurs who have a great idea, have a great team, raise great capital, and then take that idea and productize it for many months, only to realize that the market has changed, or their solution doesn't work, or it doesn't actually solve the customers' needs.

Roland Siebelink: Michael, if the audience made it this far in the podcast, what can they help you with? What are you looking for? What kind of connections are you looking to make? How can people be of help? 

Michael Matias: If anybody's interested in hearing more about deep fakes and how they may impact their own industry, obviously reach out to me and I'll be happy to connect you to the relevant folks.

My email is [email protected]. Or you can visit our website, getclarity.ai. We're looking for the right companies, those looking for really great solutions today, tomorrow, and for the future. If you're listening and this is you, then please reach out.

Roland Siebelink: Yes. And of course, I'll be happy to make an intro as well, now that I know you, Michael. Do you have jobs open at the moment? And if so, what profiles are you looking for in particular?

Michael Matias: We're hiring across the board in engineering - anywhere from deep AI research to machine learning engineering and MLOps to full-stack. We're also hiring for sales and for marketing, pretty much all positions. Just as much as we're hiring for positions, we're hiring the right people: people with the right mindset of being on the court, of living in reality but executing fast, and of having some great ideas and testing them out in real time. That's what we're looking for.

Roland Siebelink: Okay. The secret formula to get a job with Clarity was revealed right here on this podcast. I'm only half serious at this stage. 

But thank you so much, Michael, for joining the Midstage Startup Momentum Podcast. This has been an amazing interview, and I wish for Clarity to become as big as you anticipate, and even bigger than that. Thank you so much once again.

Michael Matias: Thank you so much, Roland. Thank you. 

Roland Siebelink: And for all the listeners, next week we'll have a different episode with a new amazing founder from somewhere around the world. Talk to you then.
