So currently a lot of the industry is focused on the annotation that happens on training data, before you start training your models. This is still where a lot of the inquiries we get come from: a lot of companies are still at the stage where they're collecting their training data.
If they have too much data, they're trying to curate it and annotate it. A lot of them, especially in the early stages (we work with a lot of startups), may still be changing their taxonomy. They may be trying to define, okay, what classes do we want to label, how do we want to label them, and so on.
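To make that concrete, a taxonomy at this stage is often just a versioned list of classes with labeling instructions that annotators follow. Here's a minimal sketch in Python; the class names, fields, and the example of splitting one class into two are all hypothetical, not a real client schema:

```python
from dataclasses import dataclass, field

@dataclass
class LabelClass:
    """One class in the annotation taxonomy."""
    name: str                # e.g. "pedestrian"
    definition: str          # the instruction shown to annotators
    examples: list = field(default_factory=list)  # reference images or snippets

@dataclass
class Taxonomy:
    """A versioned label schema; early-stage teams revise this often."""
    version: str
    classes: list

# Hypothetical v2 of a taxonomy, after the client splits one class in two.
taxonomy_v2 = Taxonomy(
    version="2.0",
    classes=[
        LabelClass("pedestrian", "Any person on foot, fully or partially visible."),
        LabelClass("cyclist", "A person riding a bicycle; label rider and bike as one box."),
    ],
)
```

Versioning matters here because when the taxonomy changes, previously labeled data may need to be relabeled against the new schema.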
So this is where we come in very frequently: we annotate the seed training data set that companies are using. There are also some companies that have a pre-trained model and are trying to optimize and automate a lot of that process, so they just want us to validate the results of some type of pre-annotation that they're doing on their end. That's another stage: after they've trained some kind of basic model, we can come in and validate those outputs. Very frequently this saves time and effort, but sometimes it creates additional effort, because if the pre-annotations are really bad, we actually have to take even more time to correct them before submitting the data. So sometimes companies think pre-annotation will save them a lot of money, when it actually creates more work for companies like us. If you're trying to do pre-annotation, your model has to be quite good.
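That economic trade-off can be sketched in a few lines. This is an illustration only: the triage categories, the IoU thresholds, and the relative effort numbers are assumptions, and real values would come from the task and the QA spec:

```python
def review_preannotation(pred, reviewer_label, iou):
    """Triage a single pre-annotated box after human review.

    pred, reviewer_label: hypothetical annotation records (dicts with a "class" key)
    iou: overlap between the model's box and the reviewer's corrected box
    """
    if pred["class"] == reviewer_label["class"] and iou >= 0.9:
        return "accept"    # cheapest outcome: just a quick visual check
    if iou >= 0.5:
        return "correct"   # adjust the existing box; still saves some time
    return "redo"          # bad pre-annotation: delete it and relabel

# Assumed relative per-item effort; "redo" exceeds plain manual labeling (1.0)
# because the reviewer must first delete the bad prediction.
EFFORT = {"accept": 0.2, "correct": 0.7, "redo": 1.3}

def preannotation_pays_off(outcomes):
    """True if reviewing pre-annotations was cheaper overall than
    labeling every item manually from scratch."""
    blended = sum(EFFORT[o] for o in outcomes) / len(outcomes)
    return blended < 1.0
```

The point the numbers capture: once the share of "redo" items climbs high enough, the blended cost crosses the manual-labeling baseline and pre-annotation becomes a net loss.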
And then once the model is in deployment, we're working with some companies to provide live monitoring of their systems. Frequently, these are high-risk systems that require an additional, second layer of human monitoring and auditing. For some of them we're actually looking at live streams of data in order to correct the model's responses in real time and also handle alerts in real time.
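A minimal sketch of what that human-in-the-loop monitoring could look like is below. The hooks (`send_to_reviewer`, `raise_alert`), the event fields, and the 0.8 confidence threshold are assumptions for illustration, not a real client integration:

```python
def monitor_stream(events, send_to_reviewer, raise_alert, conf_threshold=0.8):
    """Route live model outputs through a second, human layer.

    events           -- iterable of model outputs from the live system
    send_to_reviewer -- blocking call that returns a human-corrected output
    raise_alert      -- escalation path for critical events
    """
    for event in events:
        if event.get("severity") == "critical":
            raise_alert(event)                 # handle alerts in real time
        if event["confidence"] < conf_threshold:
            yield send_to_reviewer(event)      # a human corrects the response live
        else:
            yield event                        # confident output passes through
```

The design choice worth noting is that the human layer sits inline on low-confidence outputs, so corrections happen before the response reaches the end user rather than after the fact.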
So this is a new type of service that we're exploring currently, and it's something that's really promising for us. I'm very excited about it, because it also guarantees more reliability and trustworthiness of these systems. And I'm trying to promote it as a best practice, especially for high-risk systems.
And then of course there is the post-deployment auditing of these models, let's say. We're not doing enough of that yet, but I think a lot of companies are still just trying to figure out their entire pipeline: how to create this continuous deployment, continuous training, and continuous auditing and improvement of their models through this type of annotation. Essentially that would mean we verify the model's responses, even if it's not in real time, and some of the data we generate is used to retrain the model and improve it in the future.
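The retraining side of that loop can be sketched as turning audit results into training examples. Again a sketch under stated assumptions: the record fields (`verdict`, `model_response`, `human_correction`) and the minimum batch size are hypothetical:

```python
def build_retraining_batch(audited, min_batch=1000):
    """Convert offline audit records into retraining examples.

    Each audited record pairs the model's response with the auditor's
    verdict and, where the model was wrong, a corrected response.
    """
    batch = []
    for record in audited:
        if record["verdict"] == "correct":
            target = record["model_response"]    # verified output becomes a positive example
        else:
            target = record["human_correction"]  # the auditor's fix becomes the training target
        batch.append({"input": record["input"], "target": target})
    # Only hand off a batch once there is enough audited data to retrain on.
    return batch if len(batch) >= min_batch else None
```

This is what closes the continuous loop: deployment produces responses, auditing verifies or corrects them, and the corrected data feeds the next training run.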
So these are the different stages across the entire life cycle where we can be plugged in.