A famous Reuters dataset from the 1980s includes “Blah blah blah.” in place of some stories. Why?
Show Notes
- 00:31 – The link Jess sent
- 08:31 – SGML
- 08:46 – This is what the blahs look like and this is what all the entries look like.
- 24:00 – FTP
- 24:34 – Linguistic Data Consortium
- 29:00 – RCV1 at NIST and David D. Lewis’s README
- 30:22 – Construe-TIS: A System for Content-Based Indexing of a Database of News Stories (Phil Hayes and Steven Weinstein)
Adrianne: Hey everyone.
John: Hey Adrianne.
Regina: Hey.
Billy: Hey.
Adrianne: We got an email from a listener and I called dibs on it, but I think everyone read it anyway.
John: Sorry!
Adrianne: Does someone want to read this email?
John: Here it is, a website contact form message from Jess.
Adrianne: That’s the one.
John: Why does Thomson Reuters newswire say “blah-blah-blah”? Reuters-21578, with a link, is a dataset containing Reuters newswire items, short businessy headlines and descriptions from 1987. It’s very popular for machine learning research because it’s extensive and well labeled. For some reason, some articles in the dataset have article bodies containing only the words, blah blah blah. How did this happen? Was it in the Reuters database? Or did the academics they worked with introduce it? Why blah blah blah instead of just leaving the article body blank?
Adrianne: I am very into this because I love machine learning training datasets. It’s like this essential distillation of humans and computers trying to communicate with each other and I just think it’s really lovely. So I wrote about this back at the Outline and I’m just going to read from this story, which is from 2017, so I’m going to quote myself here.
John: Nice.
Adrianne: “As machine learning research accelerates, scientists have started pooling their resources. ImageNet is a popular data set produced by researchers at Stanford and Princeton that contains 14 million images grouped by nouns in synonym sets such as “kid, child,” “woman, adult female,” “office, business office.””
So ImageNet is one of these many publicly available data sets made by corporations and researchers and released for free online for others to use in training algorithms.
John: Training for what?
Adrianne: To train machine learning algorithms. So this is like what would end up in an app like a face tuning app or language translation. Anything that involves using a lot of data to try to emulate some kind of more humanlike function with an algorithm.
John: It’s like someone who says kid might actually mean child, or it might also mean child?
Adrianne: Exactly.
John: Okay.
Adrianne: And also associate that with the image. So you’re just trying to teach a computer like basic s**t that people learn by the time they’re five.
Regina: Dumb computer.
Adrianne: Yeah, what idiots. This can be done for basically any type of data, as long as you can get a whole lot of it and label it somehow consistently.
So these datasets are all called corpuses and there are tons of them. The dataset that Jess emailed about is a relatively small one by today’s standards. It is a text corpus and it’s called Reuters-21578.
John: Catchy name.
Adrianne: Yeah, I know. It’s 21,578 Reuters articles.
Regina: So it’s also a very creative title.
Adrianne: Yes, and these articles are labeled with topics. So those topics might be financial or economic, like mergers and acquisitions or interest rates, or they might be labeled with a proper noun, like a person or a country or region. And this data set is available for free online.
John: How do you access this?
Adrianne: The place that Jess was looking at was UCI, University of California Irvine, has a machine learning repository that has a bunch of datasets.
John: Okay.
Adrianne: She said that she was downloading this dataset not for her job but for a personal project.
Jess: I’m actually a designer. I work with data, but I’m a designer and I wanted a cool news dataset that I could use. I liked the retro quality. I started going into it in a little bit more detail and found all these amazing instances where the entire article body of certain things was just the phrase, “blah blah blah” and knowing that Reuters is very…
Adrianne: Straight-laced?
Jess: Straight-laced with lots of journalistic integrity. I couldn’t see that as being intentional in any sense, in the journalistic news sense.
Adrianne: Jess is actually pretty qualified to say what a Reuters employee might do or not do because she happens to be a Reuters employee. However, she’s in a different department and she was very clear that she has unfortunately no ability to help us get this answer institutionally.
John: What do you mean?
Regina: Interesting.
Billy: Put up a bulletin in the cafeteria.
Adrianne: Jess looked through the dataset and found 1,605 articles with blah blah blah in the body.
Jess: It always seems to be the full “Blah blah blah”. It’s “Blah blah blah.” with the first B being capitalized and a period at the very end, so it’s punctuated, it’s not just… For what it’s worth, it is like a proper “blah blah blah,” it’s a statement in and of itself.
Adrianne: Very intentional looking.
Jess: Yes. Yes.
Adrianne: This is a pretty solid little dataset. It appears in 7,600 papers on Google Scholar.
Regina: And did the Google Scholar papers mention the Blah blah blahs?
Adrianne: Yeah, a couple of them do. If you search Reuters-21578 and “blah” in Google Scholar, you get 35 results.
One paper is talking about the limits of the data set and says, “This collection is also disputed in reason of the famous blah blah blah.” Another says, “Of course we have omitted the body text having only blah blah blah like sentences.” Another paper refers to “Dubious documents containing just the words blah blah blah in the body”. And then one paper speculates that blah blah blah was inserted deliberately by the dataset’s creators as “noise” to “test the tolerance of classification algorithms”.
There’s no evidence for this theory or for any other theory at this point, but this thing, this quirk of this dataset is definitely out there. Anybody who looks closely at the data is aware of this phenomenon.
Jess: It’s a mystery that’s been bubbling in minds for 30 something years now.
Billy: I would just like to note that there was an Iggy Pop album named Blah-Blah-Blah that came out in October, 1986 and was released on cassette in 1987.
Regina: But what was the capitalization and punctuation?
Billy: Uh, it’s actually different. They’re all capitalized and there’s dashes between them.
Adrianne: Not the same.
Billy: Yeah. It kills the theory.
Regina: Yeah, no.
Adrianne: These articles were labeled in the data as type equals brief, suggesting that they were news alerts or headlines that were sent out for a developing story where the body was filled in later, or maybe it’s just the headline and that’s the whole thing.
Regina: If it’s the whole thing or just the headline, why would they type in Blah blah blah?
Billy: Right. Wouldn’t it be the story developing…
Regina: Right, exactly.
Adrianne: I don’t know. Okay. I’m going to send you all a sample of the data. So this is what the type equals brief articles look like. The longer articles will also have a dateline with the date and location and an actual body. I just put this into Slack.
John: Do any of these tags even close? It’s weird. It looks a little bit like HTML, but it’s not HTML.
Adrianne: This is SGML.
John: Oh, what’s that?
Adrianne: Standard Generalized Markup Language.
John: Oh.
Adrianne: It’s a document markup language originally designed to enable the sharing of machine-readable documents in large projects.
John: I guess if you’re listening to this on a podcast player where you can see the show notes, look for a link to this cause it is illustrative and I don’t really know how to convey this.
Adrianne: Can you explain what you’re looking at?
John: So we see this title tag. Inside the title tag, you see the title of an article, and then that title tag gets closed like an HTML tag would be closed. So it’s structured like HTML, where there are tags and inside those tags is content, but the Blah blah blah, sits outside of those tags. It’s not enclosed by anything. Yeah I see what you mean about this not being formatted like the rest of it is.
Adrianne: It’s in a weird spot.
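If you can’t see the show notes, here is a minimal sketch of the shape being described: two made-up records in the style of the dataset, not copied from it, plus a rough way you might flag the blah entries. The tag names follow the structure described above and the story text is invented.

```python
import re

# Illustrative only: these records are mocked up from the structure described
# above (title inside <TITLE> tags, "Blah blah blah." sitting outside any tag
# in the BRIEF items), not copied from the real dataset.
normal_record = """<TEXT>
<TITLE>EXAMPLE COMPANY REPORTS QUARTERLY RESULTS</TITLE>
<DATELINE>NEW YORK, March 2 -</DATELINE>
<BODY>Example body text of a full story...</BODY>
</TEXT>"""

brief_record = """<TEXT TYPE="BRIEF">
<TITLE>EXAMPLE COMPANY SAYS TALKS CONTINUE</TITLE>
Blah blah blah.
</TEXT>"""

def is_blah_brief(record: str) -> bool:
    """Flag records whose only 'body' is the stray Blah blah blah. line."""
    has_body = "<BODY>" in record
    has_blah = re.search(r"^Blah blah blah\.$", record, flags=re.MULTILINE) is not None
    return has_blah and not has_body

print(is_blah_brief(normal_record))  # False
print(is_blah_brief(brief_record))   # True
```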
Regina: I don’t know. I think it has to have served a purpose. Like, I really just want to know what the purpose was.
Adrianne: Yeah, well, a bunch of people worked on this dataset. A lot of them are on LinkedIn, so I’m going to see how many of them I can track down.
John: I think you can do it.
Adrianne: Thank you, John, for your vote of confidence.
John: Coming up, Adrianne does some research, asks some questions, blah, blah, blah.
Adrianne: I’m back.
Billy: Hey!
Regina: Welcome back!
Adrianne: I’m back from field reporting on LinkedIn.
Billy: Wow. Did you have a hard time getting people to respond to you when you reached out with the subject line, Blah blah blah?
Adrianne: I did, actually, funny.
Billy: I would imagine.
Adrianne: It was “podcast query, Blah blah blah.”
John: Oh, my god.
Adrianne: I did get a surprising number of people responding.
Billy: People who worked on this in the 80s?
Adrianne: People who worked on this in the 80s, yeah. Well, let’s get into the dataset sausage making.
Billy: Okay.
Regina: My favorite kind of sausage making.
Adrianne: The 80s, you can put some 80s music in here, maybe.
[John makes weird laser sounds]
Billy: A what?
Regina: I’m sorry, what?
John: Those are the sounds of synthesizers.
Billy: Like a little kid using a phaser.
Adrianne: Reuters is sending out a huge volume of news. Subscribers want to get updates on specific topics or specific regions or companies, so Reuters editors would manually add topics to each story as it came across their desk. And Reuters decided to automate this.
John: In the 80s? Wow!
Adrianne: And a few years later, this dataset pops up on the internet. It turns out it took a lot of people to make Reuters-21578 happen and in the case of the Blah blah blahs, I have basically three different groups of suspects. So first up is Reuters, someone there could have put Blah blah blah into their actual feed of stories, maybe in some kind of invisible way on the backend.
Then there was another group that came in later to clean up the data and publish it for academic use and it could have been them.
But before it got to the internet, Reuters-21578 was in the hands of the Carnegie Group, an AI startup that Reuters contracted with to build this system of news article classification.
Monica: At the time that this data set was collected, I was a programmer, I was fresh out of school, just a few years. This was actually my first company so I didn’t really have a sense of the ways of the world or anything like that.
Adrianne: That’s Monica Cellio. She was a programmer at Carnegie Group and she worked on the system that relied on this dataset and the system was called Construe.
Monica: I actually saw these rooms where they had rooms full of people whose job was to receive a story from a wire and in just a few seconds, scan it, attach tags to it and send it back out. So this was what we were trying to automate, at least the 90% that could be automated.
We were working on that, which meant we needed a pile of data to work with. And we needed to consult, we needed access to their experts, how do you make decisions about how you categorize this stuff? We actually had one of their categorizers working onsite with us as we were developing the rules that the software would use and figuring out the edge cases.
Adrianne: Unfortunately, Monica did not remember anything about the Blah blah blah’s.
The person who alerted me to this sent me some SGML showing what these Blah blah blah records look like, and they look like a mistake.
Monica: They do, they are in the wrong place. So we’ve got a text block that contains a title block and the Blah blah blah shows up after the title, I guess. Let me find one that doesn’t have the blah blah blah. So, yeah, the ones that don’t have Blah blah blah after the title, you have a dateline block and then a body block, and the ones that have Blah blah blah are missing the dateline and the body and they just say Blah blah blah instead, which is weird.
Adrianne: So Monica didn’t remember anything, but at least she was excited about this mystery.
Monica: If you publish something, please, please let me know, send me a link and good luck! And if you get the answer to the blah blah blah I want to know now.
Adrianne: She suggested I talk to one of the linguists who worked on the Construe project, so I called Peggy Andersen.
Peggy: I had to actually go look up what Reuters-21578, whatever that was, because I wasn’t really aware of what happened to it after I finished working on it.
Billy: It’d be weird if she remembered that specific one, you know,
Regina: I mean, it was infamous.
Peggy: The whole mission of the company was to apply artificial intelligence, such as it was back then in the 80s, to commercial problems. So, Reuters contracted with us to automate tagging their news stories so that their users could find the story that was of interest to them. The problem was that reporters don’t always use consistent language, so we had to discover the language that they used in their natural reporting.
Adrianne: Today, you would do this with statistics. Today, you would have your program look at all of this data and then find patterns in it on its own, right? But back then they didn’t have the computing power to do it that way so they did what was called knowledge based categorization.
Billy: What’s that mean?
Adrianne: They were writing rules.
Billy: So they have to come up with these rules on their own and then add them.
Peggy: We actually had humans, me and other linguists on the project, studying the words that were used and creating rules. So grain, you know, grains are traded, and we took different grains. You could say grain, but if you allowed every single story that had the word grain in it, you’d get some things that were not about grains that are traded. That could be whole grain alcohol, the fine grain of wood, or something like that.
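To make the grain example concrete, here is a minimal sketch of the kind of hand-written rule Peggy is describing, written in modern Python rather than Construe’s actual rule language (which isn’t shown in the episode); the exclusion phrases are purely illustrative.

```python
# Hypothetical illustration of knowledge-based categorization: a hand-written
# rule with hand-picked edge-case exclusions, the kind linguists had to find.
def tag_grain(story_text: str) -> bool:
    """Tag a story with the 'grain' topic only when 'grain' appears
    in a trading context, not in phrases like 'grain alcohol'."""
    text = story_text.lower()
    if "grain" not in text:
        return False
    exclusions = ["grain alcohol", "fine grain", "grain of wood", "grain of salt"]
    for phrase in exclusions:
        text = text.replace(phrase, "")
    return "grain" in text

print(tag_grain("Wheat and other grain exports rose sharply."))    # True
print(tag_grain("The cabinet shows the fine grain of the wood."))  # False
```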
Adrianne: Did you realize that this data would be publicly released?
Peggy: No. No. Our goal wasn’t to create data, it was to put tags in real time on news stories so that Reuters readers could find what they were looking for.
Adrianne: Some of the records have just a title and then no dateline, no body and it just says “Blah blah blah.” like capitalized “Blah” lower case “blah” lower case “blah“, period.
Peggy: I can’t explain that.
Adrianne: Do you remember seeing that?
Peggy: No. I worked with software engineers. They’re a special breed of people. The people at Carnegie Group were really some of the smartest people I’ve ever known; most of them graduated from Carnegie Mellon. But also playful, and they did some crazy things. It could have been introduced then, or it could have been introduced later on by the people who managed the corpus once it was released for public use. I really don’t know.
Adrianne: Do you think it’s possible that it was in the original data from Reuters?
Peggy: I don’t know. I really don’t know. That seems unlikely to me.
Adrianne: I explained that we had a listener who had requested this information.
Peggy: I mean, why does this person care at this point?
Adrianne: I think they were curious because they work at Reuters and they were like, Reuters would never put this in any of its own stuff because Reuters is so, you know, grown up.
Peggy: Yes, exactly. It probably was not initiated by Reuters.
Adrianne: Peggy’s only theory was that it might’ve been an issue where the test program could not accept stories that didn’t have bodies.
Peggy: If you’d find out, I’d love to know the answer.
Adrianne: Okay. See, now you’re curious too.
Peggy: Yeah.
Adrianne: So Monica and Peggy both told me that they did not realize that this dataset was going to be published. Which makes sense because Reuters built it for a competitive advantage, to sell a product to customers. So I started to wonder how this even got into the world.
Adrianne: Dave Lewis is credited as the source of the data in the UCI repository so I figured I should talk to him.
Dave: So, I was a graduate student in computer science, at University of Massachusetts working with Bruce Croft and Bruce called me into his office one day and said “Look at this”. And it was a newsletter from a company called Carnegie Group, which was an AI startup back during the second AI bubble.
They had a graph on the front page of this newsletter which was purportedly comparing an expert system they’d built with a statistical text retrieval system, which is what Bruce and I worked on. We were pretty upset about this because it was comparing apples and oranges.
Adrianne: So Dave is saying that Construe which is Carnegie’s system was being compared with something that his group had done in the past and Carnegie Group was basically bragging about how well Construe had performed versus these other methods. So Dave and his advisor thought this wasn’t a fair comparison.
Dave: There was a lot of debate going on between whether one should use knowledge-based systems for information retrieval or statistical machine learning for information retrieval. And Bruce and I were mostly on the statistical and machine learning side, though we both dabbled in the other.
Anyway, you know, we thought this was kind of unfair, and in the moment, it was the kind of debate that was going on.
Adrianne: Dave’s advisor reached out to the Carnegie Group and got in touch with this guy, Phil Hayes and Phil Hayes was extremely chill. He said, “Why don’t I give you this dataset? And you can work on it and do experiments using your different methods”.
Dave: And he sent it to us and so I ended up using that in my dissertation. It was actually the central data set that I used in doing experimentation on machine learning and natural language processing or text categorization.
Adrianne: How did this dataset become public?
Dave: Well, yeah, that was sort of accidental. I would say I and many computer scientists were probably a little more careless around IP issues, intellectual property issues, back in those days.
I don’t think there was ever any formal document between Carnegie Group and UMass, so I kind of carried the data set along with me. I did a research faculty position at the University of Chicago and then I was at Bell Labs. I was collaborating with a bunch of people and I just had the data up on an open FTP site.
John: Oh my god.
Adrianne: So Dave had the data up on an open FTP site, which is just a way to easily send large files and it didn’t even occur to him to put a password on it.
Billy: Oh, wow! So anybody on the internet could access this?
Regina: It was the Wild Wild West back then.
Dave: People traded around FTP sites with datasets pretty casually in those days. And I had talked to Carnegie Group and Carnegie Group was looking into releasing it. We talked about whether we’d do some sort of public announcement or maybe put it at the Linguistic Data Consortium, which was just starting up back then. But what happened basically was it just sort of diffused out there and started showing up in other papers.
Adrianne: Dave wanted to be very clear that he would never do this today.
Dave: I will say that it is sort of funny because over my career, I ended up later in life working a lot with lawyers and doing expert witness work and building legal software and things, and I’ve become much more fussy about intellectual property issues. I work for a cybersecurity company now, too, so I should say that I’m now very, very fussy about these things, if anybody’s listening here.
Adrianne: You would put a password on it today.
Dave: I would put a password on it, yeah, right. Well, today there’d be like a 17-page legal agreement that’s been signed off by general counsels and things.
Regina: It’s the 80s, you know?
Billy: Well, it also fits with the culture of open source software and people developing this stuff, like wanting to be able to share things and have them work across different companies when they move or with other people they’re collaborating with. So it makes sense, it just seems like they didn’t have any formalized way to say, “Oh, one company owns this,” or, “Yes, this is under an open license.” Wild Wild West.
Adrianne: Dave actually ended up collaborating with Reuters.
Dave: Reuters began to notice that these computer scientists were all using this weird thing they were calling the Reuters dataset, and nobody at Reuters seemed to know where it came from or how.
So anyway, to their credit, they decided that if there was going to be a Reuters dataset out there, they wanted it to be a good one, and they also saw themselves as benefiting from people working with their data. They decided they would put a good dataset out there.
Adrianne: Reuters published a new dataset called RCV1. This data set was much larger. It had about 800,000 documents and it’s held at the National Institute of Standards and Technology. But for this dataset, you can’t just download it. You have to submit a request to NIST, and then you have to agree to a bunch of terms and conditions for how you’re going to use the data.
Dave was able to negotiate with Reuters to release a version of the data, but it wasn’t quite like Reuters-21578. The public dataset did not include the article text in the way it would have appeared originally.
Dave: I couldn’t release just the actual documents that somebody could read as if they were news. But Reuters was okay with releasing the set of words that occurred in the documents, if they’d been scrambled in order. It’s not good for natural language processing, but it’s fine for many machine learning tasks.
Regina: I don’t really understand the difference between those two things. Like why would it be okay for one and not for the other?
Adrianne: I think it’s not good for natural language processing because it’s not in the natural language, right? Like if you’re trying to teach a computer what a normal sentence sounds like, and maybe generate its own sentences that sound normal, it’s not going to be helpful for it to look at a word list. But if you were trying to teach a computer just what words are associated with other words in these articles, then it would be fine if they’re out of order.
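Here is a rough illustration of the compromise being described: keep the words that occur in each document, lose the readable order. The sample sentence is made up, not taken from any Reuters story.

```python
import random
from collections import Counter

# A made-up sentence standing in for a story body.
story = "Shares rose after the company reported higher quarterly profit"
tokens = story.lower().split()

bag_of_words = Counter(tokens)  # word counts: enough for many classification tasks
random.shuffle(tokens)          # scrambled order: no longer readable as news

print(bag_of_words)       # the word counts survive
print(" ".join(tokens))   # the word order does not
```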
Billy: Wait, the thing I’m still confused about is, so if they did this thing where they’re like, “Okay, yes, we can make this publicly available, but you have to scramble things” then why is the original one still available?
Adrianne: Well, it’s still useful and also they have no control over it at this point. I mean, they could go around DMCA-ing everybody.
Billy: But the original one isn’t officially available from them? It’s just in all of these other places now.
Adrianne: Yeah. I asked Dave about the Blah blah blah’s in Reuters-21578, big surprise, he didn’t know why they were there, but he was able to establish that he did not put them in. They were in the data before it got to him.
And so why didn’t you take out the Blah blah blah’s at this point?
Dave: We did our best not to mess with the raw data, with the text, even when it seemed like there were errors or weird things in there, because we viewed ourselves as cleaning up the formatting and the metadata, not changing the original data. So, we weren’t sure where that came from. My guess was it was kind of a filler.
Adrianne: So at this point, nobody from Carnegie Group remembers this. Dave remembers it being in there and says he wasn’t the one who added it. So, I felt like the next person I needed to talk to would be somebody from Reuters.
Adrianne: It’s 1986. What are you doing?
Steven: Well in 1986, I was working at Reuters and we were doing some experimental work in Artificial Intelligence.
Adrianne: This is Steven Weinstein, he worked at Reuters on the editorial side and ended up working on this project which at that time was called the Construe Topic Identification System.
How cutting edge was this at the time?
Steven: That had never been done before.
Adrianne: So pretty cutting edge?
Steven: I would say bleeding cutting edge, yes.
Adrianne: Steven told me he gets messages every so often about this dataset. And he even gets messages about the Blah blah blah thing.
Steven: It’s been coming up every handful of years for decades now, so it’s fun to think that these things live on in perpetuity.
Adrianne: He told me that when people ask him about the Blah blah blah thing, he usually ignores them, but I was so persistent that he called Peggy Andersen, who I spoke with before to try to dislodge this from their collective memory.
Steven: We think that the answer is that the way the system worked is it evaluated the text of the story, not the headline. And one of the things that we did back in that time in order to get news quickly out onto the Newswire was sometimes we would publish the headline first, we’d call it a flash or a bulletin, and we just put the headline out so there was a headline with no body to the text.
That would mess up the system when the system went to try to evaluate what was going on in the body of the text in order to categorize it or, later on, do some other things with it. Since the system couldn’t process no data and come up with a reliable response, we believe, Peggy and I agree on this, that a little program was written so that for items that didn’t have any text, the dataset would be updated to include Blah blah blah, just as something there that we could key off of if we needed to identify those stories. So we think that’s the Blah blah blah.
Adrianne: I see. So you don’t actually remember doing this, but you think after talking it over with Peggy that that’s what happened?
Steven: Well, Peggy and I had the same recollection about it. I don’t think we constructed it together. I think we both had the same memory of that’s how that came about. And it was in the dataset that we were using for testing. It was never in the feed that Reuters put out or the data that went into the database.
Adrianne: Steven’s story is that this was a hack, a temporary workaround. Reuters was sending out stories separated by these control characters that would indicate where stories started and stopped. And these were things like an ampersand, a pound sign, a two, a semicolon. The other markup, that SGML, was added by the Carnegie Group.
At first Steven’s group thought that the body of an article would be more important than the headline for categorization, and that later changed. But the way he remembers it, during this one period, the system was looking for a headline, skipping the headline, and then attempting to process whatever text came immediately after that closing title tag. And so stories that had nothing there after the title tag ended would cause the code to break.
Steven: So the system could look at Blah blah blah and say, “Okay, there’s some text there that I’m deciding not to evaluate,” rather than finding no text and breaking.
Adrianne: And then, so that was done while you were testing, but at some point that couldn’t go into the final product, because your system would still have to deal with these bodyless headlines?
Steven: That’s right. We changed the coding of the system to recognize what was the headline and what was the body of a story.
Adrianne: And why Blah blah blah?
Steven: I think we used Blah blah blah because there was no chance that would ever be in a news story, and it was something that we could catch, like XXX or some string of characters. If we needed to swap it out or pull it out of the dataset, it was a unique and distinct set of characters that wouldn’t affect anything else that was in the dataset.
Billy: But it’s also feasible that Reuters would quote somebody saying, “Blah blah blah”.
Adrianne: I asked Steven about that and he said there were different phases of the project where they looked at other types of stories but at this point they were just looking at financial stories and this phrase would never occur in a finance story. Reuters had really strict rules for those reporters like you couldn’t even say that stocks had “plummeted”. That was too editorialized.
Steven: Reuters didn’t actually ever publish anything that said “Blah blah blah”. I think reporters would have gotten fired if they said that.
Adrianne: And they couldn’t do a phrase like “No body text found” because that could potentially trigger some of the rules that they were writing for categorization.
Steven: “No” is a word that causes a lot of things to happen. And when “no” is paired up with other things, it certainly could create something going in the wrong direction.
Billy: Yeah. So you think if you wanted it to be unique, it would be like a random string of letters, it would be like, QWERTY, ASDF or something.
Adrianne: Steven said they also didn’t want to use anything that could potentially break something else. Like it’s possible that could have confused the system, which was relying on standard language dictionaries to parse the text.
Steven: I wanted to be very careful about not causing a problem, not creating a problem by trying to work around another problem.
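For anyone curious what that little program might have looked like, here is a guess at its shape, written as modern Python purely for illustration; the original 1980s code isn’t public, and the function and field names here are invented.

```python
# Hypothetical sketch of the workaround Steven describes: give headline-only
# flashes a unique placeholder body so the categorizer always has some text
# to (deliberately) skip instead of crashing on an empty body.
SENTINEL = "Blah blah blah."

def fill_empty_bodies(stories):
    """Insert the sentinel wherever a story arrived with no body text."""
    for story in stories:
        if not story.get("body", "").strip():
            story["body"] = SENTINEL
    return stories

stories = [
    {"title": "EXAMPLE CO SAYS TALKS CONTINUE", "body": ""},             # flash: headline only
    {"title": "EXAMPLE CO REPORTS RESULTS", "body": "Full story text."},
]

for story in fill_empty_bodies(stories):
    print(story["title"], "->", story["body"])
```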
Billy: I think the thing also is now regardless of why they put it in there, it’s out there. It’s already been spread to all of these places as this kind of freely shared dataset. So it’s just like, it’s just in the mix now.
Steven: I would never have thought that it’d be a dataset for 35 years and we’d still be talking about it now.
Adrianne: I decided it was time to call Jess.
One problem I ran into on this story is that every time I contacted someone, they were like, “Why do you care about this”?
Jess: I mean, it was in 1987. I feel like they’ve moved on with their lives. Not that that’s necessarily right, but I haven’t, so yeah, let’s hear it.
Adrianne: I explained why the data set was created, who made it, how the Reuters feed relied on special control characters to separate stories and then SGML was added later.
Jess: I do love how 80s this whole story is. This is fantastic.
Adrianne: And I told her about the Blah blah blah’s. How they were a temporary fix that managed to stick around 30 years later.
Jess: Honestly, if it was like a movie, it would be a very boring movie, but in real life, it’s a quite exciting little venture of natural language processing development. That’s awesome! That’s more than I could have hoped for of the Blah blah blah’s.
John: That’s our show. Underunderstood is Adrianne Jeffries, Regina Dellea, Billy Disney, and me, John Lagomarsino. We’ll be back with another episode next week. Until then, you can follow us on all the social media except for TikTok.
Regina: We definitely will have a TikTok soon.
Adrianne: Or consider joining us over on Patreon and you’ll get a bonus episode on Thursday. You’ll also be helping us pay for stuff like editing software and music. This episode came to us from a listener. Thank you, Jess. If you have a burning question that the internet can’t answer, drop us a line at hello@underunderstood.com. Maybe we can find the answer.
Billy: Thanks for listening.