Here’s a change log of my consciousness. Starting in December 2021, I’ve been capturing my ideas through the day, and then publishing them to my site the next morning. I’ve written about the benefits, its origins, and a 2-year reflection. Here are the log archives (WIP).
January 18th, 2025
08:29 AM – LLMs function like a camera: the prompt shoots vectors across a map, pulls in numbers/weights, and from there, it can generate one word at a time based on that snapshot. Superintelligence could come from turning that image into a video. For every sentence (or even every token), it can apply a meta-lens to question what it’s just produced and what it wants to do; it can continuously (a) shoot new vectors into the map, and (b) transform the map itself.
08:06 AM – Latest quality of custom-made AI video.
January 17th, 2025
07:23 PM – Futurism out towards 2045 is one of two extremes: post-scarce utopia, or, within 5 days an ASI creates nanobot swarms that make humanity extinct. Instead, I want to imagine a future where 1) we survive, 2) things are generally good, 3) there are social, cultural, and psychological tensions (as we’ve always had, but elevated). Some general meta-rules of forecasting:
Flashpoints: Local flashpoints will usher in global preparation and avoidance, ie: small scale disasters will lead to regulations and counter-measures.
Escalating p(doom): each year, we become more aware of our potential for self-extermination, and that triggers reactions & protections.
Temporary asymmetries: there will be 3-10 year gaps where people can access the future if they put in time, skill, or resources. So for example, maybe we eventually *all* have anti-aging drugs, but there might be a temporary gap where only the rich can access them.
Adaptation: we can barely conceive how weird 10 years into the future might look (consider Tim Urban’s die-progress-units). But from 2036, 2035 will be completely normal, if not even slightly boring and dated.
Fractures: for every new trend, there will be counter-forces. The stronger that trend, the more likely the reaction. This creates schisms across many dimensions.
Paradoxes: new technology enables new governing orders of a complexity that seems paradoxical to our current time. Ie: In order for us to have a functioning internet, we need government IDs to access it, but it’s all on a blockchain, so we have a mix of complete surveillance and pseudonymous privacy (more privacy than today).
05:49 PM – Elon is sublimely careless.
12:41 PM – Heroes are defined by their effort;
Heralds are defined by their impact;
Prophets are defined by their vision;
Maybe it helps to think of it as heroic/prophetic/heraldic (effort/vision/impact). You don’t have to be “the” chosen prophet, but you can still have a vision on how to align humanity towards progress.
I think I’m more prophetic than heroic (more fearless in my vision-making than in my action-taking), and it’s probably not a good idea to think heraldic (to become too concerned with outcomes). Though I’m not sure if “impact” is rightly mapped to herald.
January 16th, 2025
09:27 PM – 2045 = 1969: On the civil rights crisis of high technology
01:08 PM – We are on a plane landing in Utopia but the pilot is unconscious.
11:48 AM – I watched episode 1 of Dickinson last night. I'd say it was anachronistic. It's her story and the outfits of 1830, but with hip hop and modern slang/attitudes. Feels like a blatant appeal to Gen Z.
11:12 AM – Marketing emails from software companies come across to me like polite begging. The desperation is veiled. They speak in modern idioms, wishing my family well while using exclamation points. Email broadcasting is already unnatural, and so instead of addressing the masses with the pleasantries of a 1:1 coffee date, have the courage to be yourself. And isn’t “being yourself” the thing a company can’t be? It’s inherently not a self, but a conglomerate. Who is speaking? Who is “team”? A more honest way is to “share the process,” to give people a glimpse of what it’s like inside a company. But still, I think there’s something corrupting about letting a market dictate your decisions. Of course, that’s what venture requires. But we’re entering a paradigm where software can be so easily made that all of our ratios can flip. Someone can make tools for themself. If I’m making a tool for myself, I can not only show you the process of making the tool, but the things my tool makes. That’s the ultimate proof. I don’t want to just share screenshots of text-editor features and shill you benefits; I just want to share the fruits. If the founder isn’t actively engaged in the product, then why should we believe them? And so, that might be how to make a niche product sustainable long-term. Serve yourself, and let the small selection of overlapping others have access to it.
10:18 AM – On conceptual vs. syntactical ambiguity… It’s okay for a concept to have multiple interpretations (that adds layers of meaning), but rarely do you want your syntax to be ambiguous (being unsure what the writer means because of clumsy syntax). The best case is: “I know what you mean, but I’m temporarily unsure of the implications.”
09:32 AM – As I’m working on some Essay Architecture wiki pages, I’ve caught myself saying ‘this causes reader’s trance’ in a few different spots. I used to think it needed to have a single source, but now my latest thinking is it’s a combination of these three:
Tension (story/subtext): which is about having something slightly in the future that drives you forward. This is like a personal sense of stakes/curiosity where you’re invested in what happens. (ethos)
Flow: which is about the actual smoothness of your syntax. It’s the ability to take a complex soup of ideas and create a tight linear rail where the reader can process a lot without friction. (Even if you build tension, the trance is broken if it’s hard to wade through paragraphs.) (logos)
Sight/Sound: When your prose is laced with imagery & sonic devices (rhythm, rhyme, repetition), it hits us in our two primary senses. I feel like part of reader’s trance is when you can actually see/hear/feel it (it’s more like a hallucination than something abstract).
So together: tension moves you forward, flow keeps you moving, and sight/sound activates the imagination as you’re in motion. (And side note: I think paragraph transitions are the most likely places where reader’s trance gets broken; it’s like dropping the baton in a relay race).
08:30 AM – Just did some quick math on it. Yesterday I spent 4-5 hours writing 1,700 words. Really happy with how it came out so far. If AI did this kind of recursion (see below), where after every sentence it reviewed and rewrote all of its work, it would write 100,000 words over 45 minutes and cost $11.25 (to produce a polished 1,700 words).
And then if you throw the Essay Architecture prompt logic into it (120 prompts per recursion, lol), it’s an additional $32.40… Still, that’s $43.65 for 45 minutes of work. Relatively affordable. (A rough cost model is sketched at the end of this entry.)
FWIW, to clarify my stance. I still believe in the intrinsic reward of human writing. Simultaneously, I am interested in a “Montaigne in a vat” experiment (more of a digital art project than a commercial tool).
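To make the quick math above concrete, here’s a minimal cost model in Python. It assumes the model re-reads the full draft before writing each new sentence, which is why input tokens dominate. The per-token prices, sentence length, and step count are placeholder assumptions for illustration; they are not the exact figures behind the $11.25 / $32.40 estimates.

    # Rough cost model for the "re-prompt after every sentence" idea (see the 07:39 AM entry below).
    # All prices and sizes here are placeholder assumptions, not the numbers used above.
    def recursive_draft_cost(n_sentences: int,
                             tokens_per_sentence: int = 25,
                             input_price_per_1k: float = 0.0025,   # assumed $ per 1k input tokens
                             output_price_per_1k: float = 0.01):   # assumed $ per 1k output tokens
        input_tokens = 0
        output_tokens = 0
        for i in range(n_sentences):
            input_tokens += i * tokens_per_sentence   # re-read everything written so far
            output_tokens += tokens_per_sentence      # write (or rewrite) one sentence
        return (input_tokens / 1000) * input_price_per_1k + (output_tokens / 1000) * output_price_per_1k

    print(round(recursive_draft_cost(n_sentences=400), 2))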
08:15 AM – (A moment of unreasonable grandeur)
07:39 AM – I wonder if AI sucks at writing, not because our LLMs aren’t big enough or trained enough, but because engineers don’t actually understand what writing is. Even someone who has spent 1,000s of hours writing prose will still have difficulty telling you how it’s done. It’s not a logical process you can understand or recreate. Paragraphs come together, accidentally, often without method. Every person is different, and every essay is different; maybe that’s because the Idea is what’s really in control, and the writer is merely the vessel, deploying all its varied skills, unconsciously, to take form. While I’m very confident in how to analyze finished writing, I have no system for producing it. But as I was making coffee this morning, something clicked. While there is definitely a high degree of randomness and unpredictability, there’s still a “mode” of writing that can be emulated, and it’s very different from how our chatbots work. Human writers pause, backtrack, delete, and redefine their goal. Every sentence is a reflection point, where they consciously and unconsciously make decisions. Of the 100s of possible lenses, they might be consciously beholding 3, and have another 15 unconsciously operating. A great writer is nothing but a master of recursion. They understand, at any given instance, which decision lenses need to turn on and off. Even a writer with no plan, no goal, can produce something great in a session if they’re good at recursion. AI doesn’t do this. It (1) thinks really hard for one second to understand the assignment (by mapping what you’re asking it onto everything it’s ever seen, using vector math), and then (2) barrels forward at an unrelenting pace, making decisions that are dimensionally narrow—one letter at a time. The innovations of o1, o2, o3, o4 are weird solutions; they’re saying: instead of letting it think for 1 second, let’s let it think for 10 or 100 seconds or more. Regardless of how much it can cram, it still moves forward like a handless idiot with no ability for recursion. What if we could build a sophisticated writer with GPT-4, simply by building a writing app that re-prompts itself after every sentence? It would write one sentence, re-read from the top and scan all 81 patterns, re-configure its goals, and then decide to add, edit, delete, or start over. It would be expensive and take as long as a human (1 hour, not 1 minute), but it might be great.
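A minimal sketch of that sentence-level recursion loop, in Python. The llm() function is a stand-in for whatever chat-completion call you’d actually wire up, and the DONE / REWRITE / append decision schema is my own placeholder, not a fixed protocol; it only shows the shape of “write one sentence, re-read, reconsider, repeat.”

    # Sketch of a writer that re-prompts itself after every sentence.
    # llm() is a placeholder; plug in a real model call.
    def llm(prompt: str) -> str:
        raise NotImplementedError("wire this to your model API of choice")

    def recursive_writer(brief: str, max_steps: int = 200) -> str:
        draft = ""
        for _ in range(max_steps):
            review = llm(
                f"Brief: {brief}\n\nDraft so far:\n{draft}\n\n"
                "Re-read the draft from the top, reconsider your goal, then reply with "
                "DONE, or the next single sentence to append, or REWRITE followed by a "
                "fully revised draft."
            )
            if review.startswith("DONE"):
                break                                    # the draft is finished
            if review.startswith("REWRITE"):
                draft = review[len("REWRITE"):].strip()  # heavy edit / start over
            else:
                draft = (draft + " " + review.strip()).strip()  # add one sentence
        return draft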
January 15th, 2025
07:09 PM – Agrippa’s trilemma:
Recursion: A>B>C>D>E>F>…
Circularity: A>B>A>B>A>…
Axioms: (A is secret) B>C>D>…
06:53 PM – “I have type 2 diabetes” should never be built into a jingle with serious marketing intentions.
12:58 PM – The lords of late-capitalism have created decentralized entertainment carnivals for creators and consumers to devour each other before they devour themselves.
11:33 AM – The goal of an annual literature anthology is to source the best writing from the year and create an artifact that is palatable to the larger culture. If biases get in the way (around topics or composition), it leads to sub-excellent writing and diminished appeal.
January 14th, 2025
05:39 PM – I suspect they might be demolishing the architecture school of U. Miami that I went to. Based on the rendering of the four new dorms, it seems like they go through the old studio building. The same firm that designed the dorms also happened to design a modern architecture building nearby in 2017, and so that might’ve been the move that shifted student capacity and enabled the demolition.
12:15 PM – A short story about AI inventing a drug that lets people not have to sleep anymore. They can just operate 24 hours a day. Instead of sleeping, they inject something into their bloodstream that, within an hour, performs all the functions of REM sleep while you’re awake (ie: removes the waste produced by neurons operating all day).
08:29 AM – AI: From 2025 to 2045, the timeline unfolds as a dynamic interplay of technological, societal, and existential forces. The arc begins with early adoption and experimentation (2025-2028), as AI, quantum computing, and virtual reality transform industries, relationships, and knowledge. By 2028-2034, escalating challenges such as semi-serious conflicts over AI governance, environmental crises, and societal disruptions demand adaptive frameworks and systemic literacy. A pivotal turning point around 2034 signals a shift toward “hopeful emergence,” where humanity integrates advanced technologies with purpose, rebuilding trust, relationships, and meaning. The period from 2037-2045 is marked by profound philosophical and existential shifts, culminating in a convergence of human and machine intelligence, redefined global systems, and collective exploration of deeper questions of identity and purpose.
January 13th, 2025
11:45 PM – In Latin, “materia” means the usable wood of a tree. This maps to the idea that a writer’s Material is not the Experience/Source itself (the full, standing tree), but the lumber that’s produced from it.
05:26 PM – If you think LLMs suck at coding, you might have miscalibrated expectations. AI coding is really more of a two-way spatial language puzzle than a master-slave relationship. Yelling at it in ALL CAPS will not give you the right answer. Some thoughts:
1) It will always follow your commands, and so before you let it loose on your code, you should ask it to summarize its goal and to ask *you* follow-up questions to clarify implementation.
2) Implementing even a basic feature is often a chain of 5-10 simpler events. The more ambitious the feature, the more likely it is to mess up. It will attempt any scope you give it, from layups to Hail Marys. It’s best to have it understand the larger feature, but to implement in micro steps (a rough sketch of this loop follows at the end of this entry).
3) AI will make mistakes if the context provided isn’t clear. Do you give it no reference, one file, all files, or a handful of files? Depending on the situation, the answer is always different.
4) Even as AI gets better at coding, humans will benefit from understanding the larger architecture of their code base.
5) Take responsibility. When it gives you wrong code, it’s not because the LLM sucks, it’s because you haven’t given it the right directions. You need patience, the ability to troubleshoot and problem solve.
Considering it’s quite an active process, why do it? If you’re a code ninja, then you probably shouldn’t. If you’re not completely fluent, it makes sense. There’s a tradeoff: you get quick results, but you also lose the reps that build fluency. Is the ability to quickly write a file of code from scratch w/o AI important?
I think code has two parts to it: syntax and structure. I believe structure is the more valuable thing to learn (ie: how does a full-stack operate?). As LLMs get better, the syntax will be more accurate, enabling more people to build apps, but only if they understand the structure of the app (I think it’s harder for LLMs to grasp structure, especially as codebases get more complex).
What makes AI coding different from AI writing? In writing, the syntax IS the experience, so it’s not something you want to automate.
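Following up on points 1 and 2 above, here’s a rough sketch of the “clarify first, then micro-steps” flow in Python. llm() is a placeholder for whatever coding model you use, and the prompt wording is illustrative; only the workflow shape matters.

    # Sketch of the clarify-first, micro-step coding workflow.
    # llm() and the prompts are placeholders, not a specific tool's API.
    def llm(prompt: str) -> str:
        raise NotImplementedError("call your coding model here")

    def implement_feature(feature: str, relevant_files: dict) -> list:
        context = "\n\n".join(f"# {path}\n{code}" for path, code in relevant_files.items())

        # 1) Have the model restate the goal and ask *you* clarifying questions first.
        print(llm(f"{context}\n\nFeature request: {feature}\n"
                  "Summarize the goal and list any questions before writing code."))

        # 2) Break the feature into micro-steps, then implement one step at a time.
        steps = llm(f"Feature: {feature}\nList the 5-10 small steps needed, one per line.")
        patches = []
        for step in steps.splitlines():
            patches.append(llm(f"{context}\n\nImplement only this step:\n{step}"))
        return patches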
January 12th, 2025
05:33 PM – An attempt to think through some of the problems that essay patterns solve: unreliable, unrigorous, ungrounded, intangible, unanchored, irrelevant, uninteresting, not memorable, not usable.
04:12 PM – Great thoughts vs lame actions.
10:46 AM – Check out the “Idioms and slang” app.
January 11th, 2025
11:04 PM – New Jersey wedding
08:19 AM – Yeah, I feel like multi-day cataclysms happen in slow-motion. The open loop / anxiety of it brings a lot of attention to it, and more attention = time dilation. I wonder if there will be a sense of resolution. I’m not tapped into the theories on why/how it all happened or why it’s as bad as it is.
January 10th, 2025
05:05 PM –
From all cosmologies, know thy quest
From stagnant phenomes, slant through fresh
Of mechanical order, keep it light
Of creative angles, enter right
On abstract virtues, live this hour
Upon third doors, do not cower
In voids of nothing, wait and perch
In cities of noise, build a church
03:54 PM – Always ask yourself to bring it back to the problem statement. Even if you know this idea, and have known it for 10 years, the mind naturally drifts towards solutions. This should be a tattoo.
January 9th, 2025
08:57 PM –
On First Looking into Chapman’s Homer:
By John Keats
"Much have I travell’d in the realms of gold,
And many goodly states and kingdoms seen;
Round many western islands have I been
Which bards in fealty to Apollo hold.
Oft of one wide expanse had I been told
That deep-brow’d Homer ruled as his demesne;
Yet did I never breathe its pure serene
Till I heard Chapman speak out loud and bold:
Then felt I like some watcher of the skies
When a new planet swims into his ken;
Or like stout Cortez when with eagle eyes
He star’d at the Pacific—and all his men
Look’d at each other with a wild surmise—
Silent, upon a peak in Darien.”
07:16 PM – Traits have finer nuances. Someone can’t just be “high in openness.” There’s a massive range in contexts, and they might be incredibly open in some (abstract thinking) and quite reserved in others (parties with strangers). I do think that openness in general breeds the conditions for someone to suggest: why can’t I be open in this context too? It’s equally critical to ask: where should I *not* be open? (If you’re completely open, your mind falls out.)
04:43 PM – Poetry and logging have a lot of similar ideas. When I’m logging, I’m on my phone, and it’s harder to write prose, and so I’m often just writing a free-association stream of words, images, and reference phrases, without really connecting them through syntax. It’s not about crafting legible prose, but generating a word cloud for my future self. The idea is, from this cloud, you have the material to weave into a different format, whether prose or poetry.
I think there’s value in writing in full sentences at the point of thought (it helps make prose automatic), but I also think there’s a frontier to explore in writing pages of gook.
01:58 PM – Why chat instead of notes?
January 8th, 2025
10:22 PM – New habit: listen to the Meaning Crisis at the gym (attempt #3).
06:20 PM – Substack irked me just now. It made it very easy to accidentally click a button that would send an email to my whole audience (prompting them to join my new Chat feature), and I have no visibility over what was sent. (Not even sure if something was sent.)
03:26 PM – Could there be an algorithm that isn’t based on engagement? What if your own note/essay generated a feed of semantically similar (or opposite) ideas?
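As a sketch of what that could look like: embed your own note, then rank everyone else’s notes by cosine similarity (or by distance, for an “opposite ideas” feed). The embed() function is a stand-in for any sentence-embedding model; the ranking itself is just numpy.

    # Sketch of a feed ranked by meaning instead of engagement.
    # embed() is a placeholder for a sentence-embedding model.
    import numpy as np

    def embed(text: str):
        raise NotImplementedError("plug in an embedding model")

    def semantic_feed(my_note: str, corpus: list, k: int = 10, opposite: bool = False) -> list:
        q = embed(my_note)
        scored = []
        for note in corpus:
            v = embed(note)
            cos = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            scored.append((cos, note))
        scored.sort(key=lambda pair: pair[0], reverse=not opposite)  # similar first, or most distant first
        return [note for _, note in scored[:k]]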
02:32 PM – I imagined (a physical, published book) in the hands of family members who only understand a writer as “one who writes books.”
02:21 PM – Machinifying theory (?)
11:32 AM – Should I be broadcasting how many people are in Essay Club? That feels so incredibly normal these days, but it also feels like a financial self-dox. If founding tier is $X and I have ## people and Substack takes 10%, you can estimate how much I make from this. Some people are super transparent about their finances (I’ve seen spreadsheets), and I like the idea of sharing details that might help someone else, but revenue is a metric that could trigger an immediate emotional reaction in someone. I think I’d rather be open about it in a context where that is the focus of the discussion (business), vs. peppering that in to every email (“Hello to my X new subscribers.”)
January 7th, 2025
10:27 PM – We should start to think of social media regulation like we think about drug regulation. It would be insane to ban or legalize all drugs. Instead, we measure specific properties, understand the LD-50, and make case-by-case recommendations and rubrics. Social media is tremendously new, and it’s not surprising we haven’t gotten this right (consider how Victorian times used opium to treat a cough).
I think what we need for social media is:
1) Age verifications.
2) Specifications for algorithms / features / interfaces.
3) Proper gating.
I think it’s wrong to say that anyone under 16 shouldn’t use social media as a whole. I’d let a 9-year-old use Facebook circa 2005 (pre-feed), but I wouldn’t want a 20-year-old using Instagram today (vertical-swipe-algo-rape).
If Feeds and Likes create a digital popularity app that fuels the already intense middle school experience, then maybe kids should have a version of social media with different functionality.
04:37 PM – The Essay Architecture chat can do “compositional inversions,” meaning it can recommend you a classic essay where all your weaknesses are strengths.
January 6th, 2025
09:54 PM – (On Flat:) I actually entered all this in manually. MIDI would’ve definitely sped it up, but it was interesting trying to learn scoring syntax to emulate how I know it should sound (tempo, dynamics, rhythm). The stretchy notes are wild lol. It’s under Ornament/tremolo. You can pick between 1-4, which determines how many oscillations happen within the ring out. Idk how to create that on piano though… I imagine that by the time I’m done with this, some AI-video generator can turn the annotations into a silent film.
04:19 PM – Chord movement is about triangulation: where are you now, where are you going, what’s the key?
January 5th, 2025
07:37 PM – A loose v1 series of “AI turbulence” to be expected in the next 20 years.
2025: AI cloning issues become common; people’s images or text are turned into deep fakes or replicas against their will.
2026: AGI is used to spread hyper-realistic disinformation to skew our sense of reality.
2027: Digital beings become the center of phishing scams. From pictures and text, some can create a persuasive Zoom entity. This will range from pranks to million dollar scams.
2028: Public attention turns to AI labs: superintelligence malpractice, ethics, safety concerns, and foreign influence. It’s an election issue.
2029: A new form of addiction becomes popular where people are hooked on content made specifically for them.
2030: Companies are easily able to replace workers with productized agents; human employment feels threatened and there are visible protests.
2031: A quarter of people have an AI as their primary relationship.
2032: Most people have replicas of themselves that can handle text/video work. There is a surge of fully digital and human-automated entities online. Officially in “dead Internet” territory where the majority of interactions are synthetic.
2033: A prominent system that’s based on AI malfunctions and causes a societal spook, causing people to wonder if it was intentional.
2034: The year AI is not just good at all forms of art, but creates historically relevant works of art that people care about.
2035: AI makes a medical breakthrough that promises to solve a longstanding health issue, but there’s controversy as a company attempts to build a moat around implementation.
2036: An uncontrolled agent is able to reach a billion-dollar valuation without any human oversight or representative, sparking debate on whether AIs should have rights, access to banking, and regulation.
2037: The emerging generation has a crisis: they are unemployable, and have little autonomy since their assistants do all their writing and decision making. UBI and the post-work economy are becoming real, and we enter a simultaneous “leisure revolution” and “meaning crisis.”
2038: The first debate about the ethics and logistical implications of AI-discovered longevity drugs.
2039: A series of AI-powered cyberattacks on critical infrastructure.
2040: Superintelligent agents start building software that we can’t understand, triggering paranoia.
2041: High-profile instances of ASI bribing and manipulating humans to follow through on physical-world decisions and changes.
2042: The first superintelligent virus, which clones itself across all computers and aims to commandeer attention and resources.
2043: The most famous person in the world is an AI entity.
2044: Massive protests against AI usage and synthetic entities. A rise in agro-towns that have strict rules around technology usage.
2045: A spiritual crisis on whether we should use neural interfaces to augment our thinking with AI-powered enhancers.
05:43 PM – I’ve been reading through modern writing classics, and I realize that essays aren’t a great way to teach essay writing. Take the book Draft No. 4 by John McPhee. I sense that, naturally, writers care more about their writing than teaching. These craft essays take on the form of short memoirs with beautiful literary voice, but the theory is vague, peppered in, without nuance, and dislocated from related concepts.
05:05 PM – I’ve been using software called Flat to score a classical piano idea I’ve had for a while. Basically all of my songwriting has been analog, using hands and voice, but it’s interesting/different to map it out in software. What if I wrote and released one composition a year? (I also imagine that AI can play a role in properly expressing the imagination: turning soulless sheet music into a human orchestra that a random guy would never have access to.)
January 4th, 2025
04:05 PM – I haven’t found a middle-ground in politics consumption; I’m either feed-addicted or out of the loop. I need to find a trusted essayist. I’m not interested in “just the facts in one minute per day.” I want long-form synthesis and interpretation. The monthly cadence is a helpful filter to determine what’s timeless and what’s just a trend. I want a source that transcends left/right, and instead speaks to the larger environment: 1) oligarchs and technocrats posing as populists, 2) media manipulation, 3) feuding factions, 4) conspiracy, and 5) emerging technology.
04:05 PM – I’m attempting to not read any news and to let the events that are important enough filter to me through other means.
01:32 AM – I’ve caught myself and others with the following logic around AI use cases: “Sure, it can do X, but sucks at Y, and will never do Z.” Then a new update comes and it can do Y a bit better and Z looks within range, and you get nauseous and defensive. Instead, you should assume that it won’t just do Z, but Z to a degree beyond your imagination.
12:03 AM – 64 rare words from a single Kathleen Alcott essay: Vertiginous, alight, glistering, totemic, succored, beatific, menace, pestilence, dictaphones, hatboxes, ossifying, operatic, repudiate, monomaniacal, haughtily, vain, balaclavas, moxibustion, ministered, lancing, auricular, constituent, disavowal, repertory, ghoul, transfigure, Dalmatian, imperious, polyphonic, winklepickers, chenille, revanchism, blush, mantel, cherub, Meisner technique, vainglorious, tidily, sorry end, waspy, sequins, garrulous, irredentism, tines, contiguous, relishing, skein, adipose, suppurating, banality, sniping, blithe, casuist, pirouette, befouled, comity, littoral, susurrus, copses, bawled, meager, rejoinder, dolorous, rufous.
January 3rd, 2025
11:38 PM – Vain means “into emptiness.” It covers both pointless effort and pointless pride.
11:07 PM – An example of 4 rare words, used consecutively:
“My acupuncturist, Christina, might have been the only person who knew how truly I had wanted to stop living. Six months into treating me, a period in which my thanatotic impulses could alight on certain objects as glistering and totemic, she moved offices …”
10:51 PM – On the prudeness of dictionaries
10:10 PM – Composition is not subjective; the specific constellation of rules you accept/reject places you in a genre.
10:03 PM – Sounds like all of Twitter has taken on a degree of Alex Jones. There's some good in that, but also a danger. Would be interesting if a news outlet had a limited number of "breaking!" tags (like how football teams have timeouts). So when someone uses it, you know they're serious.
03:41 PM – “Community of Practice” is a group unified by shared actions. There is responsibility, difficulty, and purpose (more than just showing up in a chat or on a Zoom call).
02:48 PM – Storytelling is tension, and tension is stretching the unknown over time. Humans hate the unknown.
January 2nd, 2025
06:33 PM – When it comes to modeling the future, you have to consider the masses, the elites (resources), the rogue (ideas), the species, and the beast. The beast is the anticipated ASI megalosaur. The “species” is the class of agents that will blend in as humans; they are the agents of infinite supply (when cheap and fast)—the therapists, coders, friends, pornstars, writers, and doppelgängers (ie: a synthetic you to field calls and requests).
12:46 PM – On increasing the voltage of life:
January 1st, 2025
12:43 PM – Eye to the zeitgeist.
02:27 PM – AI will get great at impressions, clichés, and stereotypes. It can already nail psychedelic pop that rivals a mid-tier neo-psychedelic band. The onus, then, is to make something singular, not formulaic, not nostalgic for the past.
12:08 PM – What would a 21st century dust bowl era feel like?
12:08 PM – So the first time this year I went outside was to get a coffee, a normal task, except this time I came across mental illness, twice.
The first was someone in the cafe who was talking out loud, loudly, about a range of issues (including both her wide range of scientific research and suing Amanda Bynes); she’d talk into her phone, AI would respond, and she’d immediately cut it off into the next step of the conspiracy.
The second was an elderly woman, on a walker being guided, and she started waving to me like the Queen of England as I walked by, singing, “happy new year to you, happy new year to you, happy new year to you, happy new year to you” to the melody of the Happy Birthday song. Then she changed tone, almost like Sméagol to Gollum, and said, “don’t you fucking put that on Facebook.” I assured her I wouldn’t.
Neither of these things is that atypical in the privacy of your own mind (at least in mine—I think weirder things than suing Amanda Bynes, and will also loop absurd jingles and make abrupt persona switches); it’s just surprising when a stranger breaks a convention of normalcy in public. And what’s wrong with that? “Illness” is actually a terrible catch-all word for divergence, deviation, eccentricity. Of course, extreme mental variation *can* lead to illness (suffering), but it’s an assumption.
Archive
2024: 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 | 11 | 12
2023: 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 | 11 | 12
2022: 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 | 11 | 12
2021: ## | ## | ## | ## | ## | ## | ## | ## | ## | ## | ## | 12