Christophe Pettus lays out an argument:
If your engineers have told you that you need a data lake, you should be a little suspicious. Most organizations that build data lakes don’t need them, and a substantial fraction of the ones that do build them end up with what the industry — without any irony — calls a “data swamp.” So before we get to what a data lake is, let me say plainly: the right answer is often “not yet, and maybe never.” The interesting question is when “yet” becomes “now.”
I think my level of agreement is about 80%, and I’m glad that Christophe anticipated my “It’s really useful for data science work” argument. If the large majority of your data is relational in nature, then yeah, a data lake seems like overkill. And even among companies that do build lakes, most of the time I see them taking that lake data and organizing it into a warehouse later anyway.
I’d say the biggest downside to relying on a data warehouse is the turnaround time on requests. Suppose I need a dataset that includes columns A, B, and C from a table in the relational database, but I’m not 100% sure I actually need all three until I’ve trained a model or otherwise worked with the data in some significant way. The OLTP DBAs don’t want me running large-scale analytical queries against that data because of the performance implications, and the BI developers/DBAs quote me a turnaround time in months to get it into the warehouse. If it then turns out I didn’t need it, they’ve spent a lot of time for nothing.
That kind of scenario, in my mind, is what compels people in organizations to push for data lakes or something similar.
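To make the scenario concrete, here’s a minimal sketch of the pattern a lake enables: snapshot the candidate columns out of the transactional system once, then do all the exploratory poking against the snapshot instead of production. The names here are all hypothetical; sqlite3 stands in for the OLTP database and an in-memory CSV stands in for a lake file. In practice the export would be Parquet or ORC in object storage, queried with Spark, DuckDB, Athena, or the like.

```python
import csv
import io
import sqlite3

# Hypothetical stand-in for the production OLTP database.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (a INTEGER, b REAL, c TEXT, d TEXT)")
oltp.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, 9.5, "x", "noise"), (2, 3.2, "y", "noise")],
)

# One-time export of only the columns we *think* we need (A, B, C)
# into the "lake" -- here just an in-memory CSV file.
lake_file = io.StringIO()
writer = csv.writer(lake_file)
writer.writerow(["a", "b", "c"])
writer.writerows(oltp.execute("SELECT a, b, c FROM orders"))

# Exploration happens against the snapshot, not production. If column A
# turns out to be useless for the model, no DBA or BI time was spent
# modeling it into the warehouse.
lake_file.seek(0)
rows = list(csv.DictReader(lake_file))
print(rows[0]["a"], rows[0]["b"], rows[0]["c"])
```

The design point is that the expensive, careful warehouse modeling happens (if at all) only after cheap exploration has shown which columns actually matter.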