There are all sorts of philosophical discussions to be had here, but we can sidestep these and take a more practical approach. Specifically, we can choose to care about Predictive Power and Explanatory Power.
Predictive Power is the ability of a given theory to allow us to make predictions about the natural world. We know that Newtonian gravity is an approximation (to General Relativity, at the very least), but it's very good at predicting where the planets in our solar system will be. This is really a flat-out practical consideration - if a theory can't make predictions, it's not very useful (and some would argue it's not even science).
Explanatory Power is the quality of a theory that gives us some deeper understanding of what's going on in a physical system. For example, knowing about atomic electron orbitals allows us to make sense of the periodic table and chemical interactions. It gives us ways to develop other theories.
So, what we're looking for from a scientific theory is the ability to make predictions, plus some explanatory insight as to why something happens, so that we can use that insight to develop further theories.
This is also relevant for statistical modeling (and hence data-intensive science), because we can build our models to address either or both of these. Predictive algorithms such as neural networks can perform very well, but the structure of the model is often hard to interpret in any kind of explanatory way. Conversely, a linear regression model might tell us a lot about which variables are important, but may not make good predictions. Ideally, we would build models that are useful for both prediction and explanation.
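As a small illustration of the explanatory side of this trade-off, here is a sketch using ordinary least squares on synthetic data. The variables (temperature, humidity, crop yield) and the data-generating process are invented for the example; the point is that the fitted coefficients can be read directly as estimated effects of each variable, which is the kind of explanatory insight a black-box predictor does not readily give.

```python
import numpy as np

# Synthetic data with a known structure (all names and values are
# hypothetical, chosen just to illustrate interpretability).
rng = np.random.default_rng(0)
n = 200
temperature = rng.normal(20, 5, n)    # hypothetical predictor
humidity = rng.normal(50, 10, n)      # hypothetical predictor
# True relationship: a strong temperature effect, a weak humidity effect.
crop_yield = 3.0 * temperature + 0.1 * humidity + rng.normal(0, 1, n)

# Fit a linear model by least squares (intercept plus two slopes).
X = np.column_stack([np.ones(n), temperature, humidity])
coef, *_ = np.linalg.lstsq(X, crop_yield, rcond=None)
intercept, b_temp, b_hum = coef

# Each coefficient estimates how much the response changes per unit
# change in that predictor - this is the model's explanatory payoff.
print(f"temperature effect ~ {b_temp:.2f}, humidity effect ~ {b_hum:.2f}")
```

The fitted slopes land close to the true values (3.0 and 0.1), so the model recovers which variable matters and by how much. A neural network trained on the same data might predict crop yield just as well or better, but its weights would not offer this kind of direct reading.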