I don't think tooling can always replace best practices. There's always a mix of learning how you should think about the tools and the tools giving you a good default approach. You know, there's been a lot of talk about data-driven machine learning.
I think a lot of those tools are pretty promising. In a lot of applications, the model definition code is pretty standard, at least if we're talking about going from no model to having a model, rather than iterating on an existing model. When you're building the first model, for the vast majority of cases, what you're doing is pretty cookie-cutter.
If you're doing tabular data, you might throw XGBoost at it; maybe that'll change in the future. If you're doing computer vision, you're going to transfer-learn from something. If you're doing text, you're going to transfer-learn from something else. These approaches are pretty well understood.
Not that they always give you the best performance, but they'll give you at least a reasonable baseline. For a while we spent a lot of time building tools for that part, but now we've understood that tooling becomes really useful once you get into the end-to-end-plus-one iteration cycle: you've worked on your ad model for years, and then the tools get really useful. Initially, though, not as much, since what matters most is looking at your dataset and iterating on it: adding examples, removing examples, adding features. I think having more tools that do that would be really useful.