It seems that the way to get booked at important venues and shows often has little correlation with pure merit or pure comedic ability and much more to do with how well connected you are to the booking personnel, along with political correctness, social media presence, profile demographics, social affiliations, identity politics, and so on. I’ve seen a lot of comedians suffer under this system all over the country: in NYC, Denver and LA. I would know, lol. I’m very poorly connected, socially awkward, say the wrong things all the time and have always been bad at networking with people.
I have an idea. In the future, we should preferably have an impartial, objective entity judge a comedian’s value, or a joke’s value. Theoretically speaking, this is possible now through AI technology. It would just take time and effort to train a machine learning model that can deduce a comedian’s ability, but I believe with enough data gathering it could be done. And once we come up with this AI, we could install it on a computer present at every comedian audition, show, televised performance, and more. It would provide both comedians and the audience (if they want it) with the AI-determined feedback and results, making sure that the whole process of “judging a comedian” is “transparent, fair, unbiased and largely based on objective data,” completely backed by the ML model’s objective computation over an immense amount of comedy data.
Essentially, an example of what I’m talking about is an artificial intelligence that LEARNS how to laugh at great jokes. So you could be talking to this AI (in the form of an app, or on your laptop) all the time, telling jokes and trying to make it laugh, and becoming a better comedian in the process. It could very well be used as a portable training tool because it learns how to laugh at great jokes, not bad jokes. Or it could learn to laugh at things matching the sense of humor of a particular person, let’s say a 25-year-old white woman from New York or a 50-year-old bilingual man from Canada. And it could also learn to laugh at the different things that different demographics laugh at.
Two types of input data come to mind:
- First is joke textual data, meaning jokes written out in plain text.
- Second is auditory or video data, meaning jokes told to an audience and analyzed through audio recordings or video recordings.
(Another possible input data type is the number of recommendations from other proven, successful comedians. Kinda like Google’s “backlink” mechanism, which adds validity to a website based on how many links it gets from other websites, those recommendations would add validity to the claim that something is a good joke. I put this down as just an option, though, because any human element in this algorithm has the chance of being manipulated through favoritism or nepotism.)
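For concreteness, here’s a rough sketch (in Python, my pick, nothing official) of what a single training record could look like if we combine these input types. Every field name here is hypothetical; it’s just to make the data idea tangible.

```python
# A rough, hypothetical sketch of one training record combining the input
# types above: plain text, optional audio/video, and optional peer
# recommendations. Field names are illustrative, not a real spec.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class JokeRecord:
    text: str                          # the joke written out word for word
    comedian: str                      # who told it
    audio_path: Optional[str] = None   # path to a recording, if we have one
    video_path: Optional[str] = None   # video is optional; files are large
    recommendations: int = 0           # endorsements from proven comedians
    labels: List[str] = field(default_factory=list)  # e.g. "great", "outdated", "pun"

example = JokeRecord(
    text="I used to do drugs. I still do, but I used to, too.",
    comedian="Mitch Hedberg",
    labels=["great"],
)
```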
So in simple terms, we could start training a machine learning model that will get fed all the jokes told by past comedians that we know to be “great”. We need to write these out in plain text, word for word. This would be a lot of work. We are talking maybe 5,000 to 10,000 great jokes ever told by all the past greats, say, Kaufman, Robin Williams, Pryor, Seinfeld, Bill Hicks, George Carlin, Mitch Hedberg, Chappelle, Joan Rivers, you name it.
Once we do that, we have an AI that can see a sentence in text form and try to deduce whether it looks like a good joke or not, hopefully with 80-90%+ accuracy. We could potentially add other labels to a joke, like “outdated,” meaning it could have been a good joke that worked in Pryor’s time but likely won’t work today. Or labels like “potentially offensive toward LGBTQ people” or “potentially offensive toward Latinos.” This would be possible through a well-trained model … theoretically. And then once we feed in YOUR joke, it will run its trained algorithm and say, “Hmm, this joke is potentially offensive toward that race with about 30% probability but could be very funny with about 85% confidence.” Other label possibilities include “Rule of 3,” “Blue,” “Religion,” “Self-Deprecation,” “Pun,” etc. We already have very accurate ML models that can look at images and videos and tell you exactly what’s going on inside them, so this part should be quite straightforward.
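To make the text part concrete, here’s a minimal sketch of how a text-only, multi-label joke classifier could be wired up with off-the-shelf tools (scikit-learn here, just as one option I’m assuming). The toy jokes and labels are placeholders; a real version would need thousands of transcribed jokes and a much richer model than a bag of words.

```python
# Minimal sketch: TF-IDF features plus a one-vs-rest logistic regression,
# i.e. one binary classifier per label ("great", "outdated", "pun", ...).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-in data: each transcribed joke gets zero or more labels.
jokes = [
    "I used to do drugs. I still do, but I used to, too.",
    "Why did the chicken cross the road? To get to the other side.",
    "I told my doctor I broke my arm in two places. He said stop going to those places.",
]
labels = [["great"], ["outdated"], ["great", "pun"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # jokes x labels indicator matrix

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(jokes, y)

# A new joke comes back as a probability per label, not a hard verdict.
new_joke = ["My dog has no nose. How does he smell? Terrible."]
for label, p in zip(mlb.classes_, model.predict_proba(new_joke)[0]):
    print(f"{label}: {p:.0%}")
```

The per-label probabilities are exactly the “30% offensive / 85% funny” style of readout described above; with three toy jokes the numbers are meaningless, but the plumbing is the same at 10,000 jokes.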
So that’s the text algorithm part. But that’s not enough, because text alone cannot capture a comedian’s delivery, which is really gold. Jokes are not just words on a page. So audio analysis or video analysis will help immensely with this machine learning model. But since video files are so large, it would be difficult to add video in from the beginning. So we could stick to audio recordings to start: recordings every comedian should have, because we are always recording on our phones.
Based on the laughter feedback from audiences captured inside the audio recordings, especially at real shows, once we feed that into the machine learning model, it will get good feedback not just that “okay, so that joke is funny,” but also “okay, if you tell that joke in this manner it doesn’t work that well, but if you add intonation and say this part louder, then this joke is funny.” We are really getting granular here, so this is not going to be easy, but an audio data expert/engineer would be able to help us determine what is and isn’t actually possible to calculate from audio data. At the least, after feeding this ML model 5,000 to 10,000+ audio recordings of not just our own jokes but also the past greats’, we would be able to deduce the general formula, the general sound pattern of what a joke sounds like. So at least this model will know, “oh, man, that sounds like a great joke.”
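As a very rough sketch of the audio side, assuming a library like librosa for audio features: cut a set recording into short windows, pull simple features from each window, and train a small classifier to flag which windows contain audience laughter. The file names and labels below are hypothetical, and a real laughter detector would be far more involved.

```python
# Hypothetical laughter-spotting sketch: MFCC features per one-second window,
# fed to a small classifier trained on hand-labeled windows.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def window_features(path, window_s=1.0):
    """Split a recording into fixed-length windows and average MFCCs per window."""
    y, sr = librosa.load(path, sr=None, mono=True)
    hop = int(window_s * sr)
    feats = []
    for start in range(0, max(len(y) - hop, 1), hop):
        chunk = y[start:start + hop]
        mfcc = librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=13)
        feats.append(mfcc.mean(axis=1))  # one 13-dim vector per window
    return np.array(feats)

# Hypothetical training data: two recordings, with hand-made labels
# (1 = audience laughter in this window, 0 = just talking).
X = np.vstack([window_features("set_recording_01.wav"),
               window_features("set_recording_02.wav")])
y = np.load("laughter_labels.npy")  # one label per window, made by a human

clf = LogisticRegression(max_iter=1000).fit(X, y)

# On a new open-mic recording: where do the laughs (probably) land?
new_windows = window_features("open_mic_tuesday.wav")
laugh_probability = clf.predict_proba(new_windows)[:, 1]
```

Lining the laughter timestamps up with whatever was being said right before them is the part where an audio engineer’s input would really matter.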
There are lots of things missing from this ML model, like prop comedy or the visual elements of comedy. If we don’t feed it video, there’s no way this AI will know what’s happening on stage through audio alone, so it won’t take into account facial expressions, gestures, props and so on. That will take video data analysis, which could be a daunting but very intellectually challenging project.
Theoretically, after this AI is fairly accurate and complete, the next possible sophistication could be to take in the entire “set” of a comedian’s act. Instead of judging comedians by individual jokes alone, how did he/she arrange the jokes? What was the overall reception at the end of the set? Does this person’s set match a great comedian’s set arc/set order pattern? So Jane could have told 1. an A+ joke, 2. a C- joke, 3. a B joke, and instead of just totaling up those jokes, averaging them out, and saying Jane is kind of a B-level comedian, you could feed the ML entire sets, like 100 full recordings, to help it determine, “okay, Jane did badly that night, but overall, with 95% confidence, Jane is an A-level comedian.”
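Just to picture the gap between averaging individual joke grades and judging the set as a whole, here’s a toy illustration. The arc weighting (opener and closer counting more than the middle) is purely made up; a real system would learn the set-level pattern from data rather than hand-coding it.

```python
# Toy comparison: naive average of joke scores vs. a made-up "arc-aware" score.
def naive_average(joke_scores):
    return sum(joke_scores) / len(joke_scores)

def set_level_score(joke_scores):
    # Hypothetical arc weights: opener and closer matter more than the middle.
    n = len(joke_scores)
    weights = [1.5 if i in (0, n - 1) else 1.0 for i in range(n)]
    return sum(w * s for w, s in zip(weights, joke_scores)) / sum(weights)

# Jane's night: an A+ joke (4.3), a C- joke (1.7), a B joke (3.0) on a 4.3 scale.
jane = [4.3, 1.7, 3.0]
print(naive_average(jane))    # 3.0, reads as "B-level"
print(set_level_score(jane))  # higher, because the strong open and close count more
```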
Just some ideas from this morning! 🙂
If not, we can just stick to our shitty system right now of relying on other people’s judgments forever LOL Thanks for reading.
One easy MVP (minimal viable product) of this idea could be a “Funny or Not” app. Basically akin to the “Not Hotdog” app invented by the character Jian-Yang (played by Jimmy O. Yang) on Silicon Valley, you could feed jokes into this app and, over a large sample size, it would compare each joke against the ML algorithm that has learned what great jokes look like and tell you whether anybody will laugh at it. But even before training a complex ML model, you could just have one person assess 1,000 jokes by himself/herself and feed the ML model “I like this joke” vs. “I don’t like this joke” so that it comes up with its own algorithm based on that particular person’s taste. So it would be a “Funny or Not Funny” determination app per Joe’s opinion or Melissa’s opinion (see the sketch below). At the very least, it could offer guidance to beginner comics that the joke they’re trying to do is unnecessarily long, or racist, or problematic, or unclear, or too derivative of some other joke from the past.
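Here’s a bare-bones sketch of that per-person version: one person labels a pile of jokes “I like it” / “I don’t,” and a tiny model learns their taste. The CSV file and column names are hypothetical, and this is nowhere near a real product, just the smallest possible “Funny or Not” loop.

```python
# Hypothetical "Funny or Not (according to Melissa)" MVP: train on one
# person's labeled jokes, then score new jokes typed in at the prompt.
import csv
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

jokes, liked = [], []
with open("melissa_labeled_jokes.csv", newline="") as f:
    for row in csv.DictReader(f):   # assumed columns: text, liked (1 or 0)
        jokes.append(row["text"])
        liked.append(int(row["liked"]))

taste_model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(jokes, liked)

# Simple loop: type a joke, get the model's guess at Melissa's reaction.
while True:
    joke = input("Tell me a joke (blank to quit): ").strip()
    if not joke:
        break
    p = taste_model.predict_proba([joke])[0][1]
    print(f"Funny, per Melissa's taste: {p:.0%}")
```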