I have a background in real estate and data analytics so maybe I can shed a little light here.
It's important to remember the context of what you are looking at. A Zestimate is an output from a machine learning algorithm. It is not a broker estimate. There are many pros and cons to this.
Pros:
--A computer can parse and aggregate millions of rows of data in seconds. Imagine if you asked your broker to do a CMA (comparative market analysis) covering every single house in your zip code on 100 different data points. How long do you think it would take your broker to do that? Weeks? Months? That's what the Zestimate is doing for EVERY house in the country, every single day.
--A computer can change the complexity of a question with little sacrifice in time or efficiency. (Using the example above) now let's say that after your broker has collected those data points on every single house, you decide you only want the CMA done using 3/2s with a two-car garage. Your broker would likely need a few more weeks to manually check which of the houses he had already looked at met that criteria and then re-calculate the value. An algorithm can make this change in seconds.
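To make that second point concrete, here's a minimal sketch of the re-filtering step in Python. The listing data and field names (beds, baths, garage_spaces, price) are completely made up for illustration; this is not Zillow's actual schema or pipeline.

```python
# Hypothetical listing data -- in reality this would be millions of rows.
listings = [
    {"beds": 3, "baths": 2, "garage_spaces": 2, "price": 310_000},
    {"beds": 4, "baths": 3, "garage_spaces": 2, "price": 425_000},
    {"beds": 3, "baths": 2, "garage_spaces": 1, "price": 289_000},
    {"beds": 3, "baths": 2, "garage_spaces": 2, "price": 335_000},
]

# "3/2s with a two-car garage" -- narrowing the criteria is just a new
# filter, so re-running the analysis takes seconds, not weeks.
comps = [h for h in listings
         if h["beds"] == 3 and h["baths"] == 2 and h["garage_spaces"] == 2]

avg_price = sum(h["price"] for h in comps) / len(comps)
print(avg_price)  # → 322500.0
```

Stacking on another criterion (say, lot size) is one more clause in the filter, which is exactly why the algorithm doesn't care how complicated your question gets.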
Cons:
--An algorithm lacks the "finer touch" of a human being. The computer can parse more information, but it can only look at the data readily available to it (namely macro-level trends). A computer can't parse out things like condition (house A has granite countertops while house B has Formica), location (none of the neighbors of house A take care of their lawns, but all of the neighbors of house B take perfect care of their homes), or even "personal details" ("the seller is desperate to move and will probably sell for under asking if you close immediately"). That's the value of a human touch.
--An algorithm will generally try to fit a distribution as a whole rather than predict each of its parts individually. Say there are 100 houses and you plotted them all on a graph. In regression, you would try to find the line that runs through the middle of those 100 points, minimizing the total error from the line to each point (hence why in regression it's called the "line of best fit"). The idea is to find a line that could scale to best predict future houses that would join that distribution.
If an algorithm instead tried to nail every data point individually, it would come up with a model that is far more complex: nearly zero error on the houses it trained on, but wild swings everywhere else, so it won't scale well to future data points. This is known as overfitting. An overfit Zestimate would predict the houses it already knows about perfectly but wouldn't be able to predict unknown houses very well.

This picture is kind of what I'm talking about. The one on the left is more like the Zestimate and the one on the right is the overfit model...
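For the curious, that left-vs-right picture can be reproduced in a few lines of code. This is a toy sketch with made-up data (nothing to do with Zestimate internals): fit the same 10 noisy points once with a straight line and once with a degree-9 polynomial that threads through every training point exactly, then score both on fresh points from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a true linear trend (price vs. scaled square footage) plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 3 * x_train + rng.normal(0, 0.2, size=10)

# "Future houses" drawn from the same underlying trend.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 3 * x_test + rng.normal(0, 0.2, size=10)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

line = np.polyfit(x_train, y_train, deg=1)    # the "line of best fit"
wiggle = np.polyfit(x_train, y_train, deg=9)  # interpolates all 10 points

print("line:   train", mse(line, x_train, y_train),
      "test", mse(line, x_test, y_test))
print("wiggle: train", mse(wiggle, x_train, y_train),
      "test", mse(wiggle, x_test, y_test))
```

The degree-9 fit "wins" on the training houses (its training error is essentially zero) but loses badly on the houses it hasn't seen, while the humble line stays close to the truth on both. That's the whole overfitting story in two numbers.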
So the Zestimate is not designed to be a spot-on estimate of your house. It's designed to predict with relative certainty based on the distribution of every house in your market. In other words, the Zestimate is designed to predict every house "pretty good" as opposed to predicting one house "perfect". In that sense, the Zestimate is very good at predicting values (and from an innovation perspective, downright extraordinary). However, it's more of a jumping-off point than anything. Use it as a starting point in your decision-making process, but don't base buying decisions on it without a closer look of your own.