My Tableau story attacked a very big question: Is Pitchfork Biased?
My hypothesis was… probably.
My final answer… I think?
Overall, I was very surprised that Pitchfork somehow managed to keep pretty consistent average scores across the years and genres. However, I uncovered a pretty obvious bias: Pitchfork overwhelmingly reviewed and rated rock albums. Now, I wouldn't say this is entirely their fault, since a lot of music is generally categorized under rock. It was tough seeing over 18,000 albums segmented into just nine genres. There was an opportunity for more distinction, since some albums fell under multiple genres, but that just made my dataset a little messier and harder to combine.
One interesting point came from a calculated field I built in Tableau: the percentage of albums in each genre that received the Best New Music designation. Experimental albums were the most likely to get the honor, at 6.722%. I argued that this is because innovation is inherent to music categorized as experimental. However, this argument isn't as strong as I thought it would be, since pop/R&B is the second most likely to get BNM and rock is third. Yet this is where Pitchfork's bias comes in… they overwhelmingly review rock albums, so there is so much more room for that data to vary.
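The calculation itself was done in Tableau, but the idea can be sketched in plain Python. This is a minimal illustration with made-up sample rows; the field names `genre` and `best_new_music` are assumptions for the example, not the actual column names in my dataset.

```python
# Sketch of the Best New Music percentage-by-genre calculation,
# using a few invented rows ("genre" and "best_new_music" are
# assumed field names, not the real dataset's columns).
from collections import Counter

reviews = [
    {"genre": "experimental", "best_new_music": True},
    {"genre": "experimental", "best_new_music": False},
    {"genre": "rock", "best_new_music": False},
    {"genre": "rock", "best_new_music": False},
    {"genre": "rock", "best_new_music": True},
]

# Count all albums per genre, then the BNM winners per genre
totals = Counter(r["genre"] for r in reviews)
bnm = Counter(r["genre"] for r in reviews if r["best_new_music"])

# Percentage of each genre's albums that earned Best New Music
pct_bnm = {g: 100 * bnm[g] / totals[g] for g in totals}
print(pct_bnm)
```

On the full dataset, this is the same ratio the Tableau formula produced (BNM albums divided by total albums, per genre).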
This is where my BigQuery work came in useful. I found that really popular artists and a lot of rock albums received scores lower than 3. Some even got a 0… now that is harsh, Pitchfork. People have complained that the publication changed its position on a lot of artists after they got popular… or when their popularity declined. Does content even matter anymore? I hope so.
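The actual lookup was a BigQuery SQL query, but the filter is simple enough to mirror in Python. The rows below are invented stand-ins, and `artist`, `album`, and `score` are assumed field names for the sake of the sketch.

```python
# Sketch of the low-score lookup I ran in BigQuery, mirrored in
# Python over invented sample rows ("artist", "album", and "score"
# are assumed field names, not the real table's schema).
reviews = [
    {"artist": "Band A", "album": "Album 1", "score": 8.1},
    {"artist": "Band B", "album": "Album 2", "score": 2.4},
    {"artist": "Band C", "album": "Album 3", "score": 0.0},
]

# Roughly: SELECT artist, album, score FROM reviews WHERE score < 3
harsh = [r for r in reviews if r["score"] < 3]
print(harsh)
```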
[Site map image]

The site map above was drafted during the last class. I was intrigued by hierarchical structures due to their pervasiveness in our world. They are commonly used in a lot of the websites I visit as well. Even 