take a shot every time someone begins a sentence with “most languages” and then describes a feature specific to european languages
I tend not to be a fan of the style of pedantry where a word has multiple definitions, and someone says it clearly meaning definition 1, and then you tell them they’re wrong because you insist on using definition 2.
It’s like if they say “average” as in “median”, and you correct them with the mean. You know what they meant.
Most languages you can learn in American high schools are European. Most languages you can play Pokémon in are European.
There’s no standard way to define “most languages”, and you know what they meant.
so disturbed when people say they don’t like vegetarian food. bro this is vegetables
By “vegetarian food”, they are probably referring to vegetarian imitation meat products. Which in my opinion aren’t very good (although the impossible burger was okay).
I think it’s much better, if you are trying to eat a vegetarian diet, to eat meals that are planned from the start not to have meat in them, rather than to simply replace the meat in an existing dish with some kind of soy-based product.
Like, spaghetti with tomato sauce is vegetarian, but it’s not specifically “vegetarian food”.
I’m really confused about where you got the idea that “vegetarian food” specifically refers to imitation meat products.
In my experience it more often means meals like “cheese pizza” or “mac and cheese.” Or “vegetarian bean tacos.” Or most kinds of Indian food.
Doesn’t it make sense for all the foods you refer to as vegetarian to be called “vegetarian foods?” Rather than making a distinction between those and “foods that are vegetarian?”
Yes, anything without meat in it can be called vegetarian food, in the broad sense that it is vegetarian.
But if it’s in the vegetarian section of a store, or if someone is saying they don’t like “vegetarian food”, I think it likely refers specifically to those foods that vegetarians eat and other people mostly don’t. Which is typically imitation meat.
Yes, as someone else replied, some people legitimately don’t like many or most vegetables. But if someone says they don’t like “vegetarian food”, I wouldn’t take that to refer to potato chips — and yet I’d have a pretty good idea what they meant.
I wouldn’t normally take it to refer to macaroni and cheese, either. Cheese pizza and bean tacos perhaps less so, since those are pretty much cheaper and inferior versions of otherwise commonly meat-containing foods. So people are somewhat less likely to prefer them unless they are on a budget or have some kind of dietary restriction (e.g. kosher-observant Jews don’t eat meat and cheese at the same time, so no beef sausage pizza).
That’s the problem with “vegetarian options” that are commonly offered: if you make the same dish but leave the meat out (bean tacos) or use a substitute (veggie burgers) then yeah, it’s probably not going to be as good.
Whereas there are plenty of dishes, in Western cuisine or otherwise, that are designed from the start not to include meat, that aren’t thought of as “vegetarian food”.
People do generally like to have some protein tho, and I feel a lot of common non-meat proteins kinda memetically become ‘vegetarian food’. Like, Japanese tofu dishes or sag paneer are not vegetarian food in their original cultural context but kind of are in a western one.
Edit: I get that this isn’t what you mean; I think it has a negative halo effect
Sag paneer is totally vegetarian in the original culture. There is a tradition of vegetarianism in South Asian cultures going back centuries. Also, cheese and beans have a ton of protein.
Likewise Japanese tofu dishes, which may be as popular as they are because of veg*n Buddhists (this is apparently more relaxed in Japan in that Buddhist clergy are not required to be vegetarian, but I understand that many decide to forego meat anyway).
I can’t speak for sag paneer, but you and @loki-zen are wrong about Japan: Japanese (and Chinese) tofu dishes are very rarely vegetarian.
Japan’s most popular tofu dish is Mapo tofu:
• Japanized Chinese: 10/10, Japan looked at mapo tofu and thought “wow, a dish famous for being extremely spicy, let’s make that but not spicy” and I think that’s beautiful.
Mapo tofu has ground pork in it. Another common way to serve tofu is inarizushi, which can be vegetarian, but the usual way to make it involves katsuo (fish) stock.
I’ve eaten a lot of tofu in Japan and China and I can’t remember a single instance where it was vegetarian - if you ask for plain tofu by itself in Japan, it will be garnished with katsuobushi (fish flakes).
I’ve heard Japan is one of the worst places to be vegetarian, just because they don’t have much vegetarian culture, so even simple things that “seem” vegetarian will often have katsuo in it.
(I’ll also agree with Voxette in that mac & cheese is vegetarian food, but if someone says “I don’t like vegetarian food”, it’s intuitively obvious to me that they don’t mean mac & cheese.)
A couple years ago, I learned about a novel way of handling the concept of “identity” and presenting it to your users, and the more I think about it, the more it seems like Obviously The Right Way To Do It.
On most sites, you register with a username (or real name, or email, or whatever) and password; and when you interact with other users, their usernames are displayed to you as the handle you should refer to them by.
This has problems! Anybody in the world can register with the username “RichardStallman”. Or “RichardStaIlman”. Or “petrov_day_admin_account”. Or, if we’re talking about registering a domain name instead of a username, “bankofarnerica.com”. Or, for email, “larry.page@gmail.com”. Presenting some Internet rando’s chosen identifier to you and suggesting that that’s the mental handle you should use for them is crazy!
What if, instead, the site let you choose how to view other users’ names? When you first interact with a new person, their name shows up as an unmemorable red blob; then, if you think they’re worth remembering, you can assign them a name of your choice. This encourages you to have the appropriate default level of trust for the people you interact with, i.e. zero.
I think you reinvented phone numbers and contact lists and IP addresses.
But, like, we invented caller ID and DNS for a reason.
I do think things would be improved if you could nickname people with what you know them as. Facebook and Discord have started to support this sort of thing, which is nice.
But letting them pick a name to be known by default seems like a generally good thing.
Another good idea is marking “people you know” differently from “people you don’t”. Like SSH’s “you’ve never connected to this server before, was that intentional?” thing. Which would fix this general class of problem without introducing the usability issues.
And Twitter’s blue checkmarks are also somewhat similar, although they’re centralized “yes, we the Twitter company say that this guy is who they say they are”. I’m sure most of you don’t like centralized solutions, but I think a combination of both would be good.
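(For what it’s worth, the idea takes very little machinery — here’s a minimal Python sketch of the “assign your own names” scheme; the IDs and names below are made up, and a real system would key on something unforgeable like a public key rather than a string.)

```python
# Minimal sketch of a local-names system: the site never shows another
# user's self-chosen handle as authoritative. You see either a name YOU
# assigned, or an explicit "unknown" marker (the red-blob equivalent).

class Contacts:
    def __init__(self):
        self._petnames = {}  # stable user ID -> name you assigned

    def assign(self, user_id, petname):
        self._petnames[user_id] = petname

    def display(self, user_id):
        # Unknown users get a deliberately untrustworthy-looking marker,
        # like SSH's "you've never connected to this host before" warning.
        return self._petnames.get(user_id, "⚠ unknown user " + user_id[:8])

contacts = Contacts()
contacts.assign("a3f9c2d10b77", "actual Richard Stallman")
print(contacts.display("a3f9c2d10b77"))  # the name you chose, not theirs
print(contacts.display("deadbeef0123"))  # never seen before: warning marker
```

The important design property is that nothing a stranger types ever shows up as a trusted-looking label on your screen.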
Yea you know when I think of animanga I think of zero pandering to female audiences or gay/genderweird shit and definitely no pronoun fuckery like yes this would I say accurately characterises any of the manga I consume whatsoever at all lmfaoooooo
I mean, you have to admit that anime/manga has a lot less woke idpol than American fiction. The representation that does exist doesn’t have the whole “women aren’t allowed to have flaws” problem associated with the woke idpol pandering in recent US fiction.
But that’s probably mostly because Japan doesn’t really have a woke idpol movement. Anime/manga seems to do plenty of the same meta thing of pandering to specific target audiences (the technical term on this side is “fanservice”).
ugh i desperately want to say really mean things about diablo but its actually making the section on item rarity worse
i just
really hate diablo
but i also hate unclear communication
:(
I see you in the replies and you are #valid
I see people in the replies suggesting footnote, endnote, appendix and parenthetical.
But Have You Considered: Read Write Another Book?
hah, but I don’t have that much to say about diablo, it’s just that no matter where I put it it breaks the flow of what I do want to say about item rarity.
thoughts on path of exile’s item rarity system?
like, on the one hand i think it’s dumb that “rare” vs “unique” gives you basically zero info about which of these items is better. But otoh I think the system makes sense when you consider item rarity to be a piece of information about the item that tells you important structural things about the item, and rares do drop more than uniques!
Haven’t played PoE but I can tell from the question that I’d hate it.
(On August 27, 2018, I knew I wouldn’t be buying CP2077, because the 48-minute gameplay reveal trailer released that day showed two things I knew would be bad in that game - one of them: a purple submachinegun.)
The thing I don’t like about Path of Exile is that it locks you into a build.
It’s not a hard lock; you get a certain number of regret points for misclicks and minor tweaks. But the game design makes clear that you’re not allowed to experiment with different builds except by starting a new character. You’re supposed to have a build planned out ahead of time.
This, I think, is much less fun. In Diablo, as a new player, you learn by playing the game and experimenting with different builds. In Path of Exile, as a new player, you read a build guide and follow it exactly.
You can’t just play the game and experiment with different builds. If you do that, you could end up in an unwinnable state because you picked the wrong things. TV Tropes calls this “Unwinnable by Design, Cruel type”, which is “considered a design flaw by most design philosophies today”. Speedrunners call it a “soft lock”, and it’s usually a bug.
And yes, this applies to any game that forces you to make irreversible choices for character progression. Which, to be fair to Path of Exile, describes a whole lot of games! It’s just that they all suck! [1]
It’s at least a little forgivable in, for instance, StarCraft II: Wings of Liberty, where the irreversible choices are relatively balanced and you’ll have a pretty easy time beating the game no matter which ones you choose. But it’s a completely different situation where you intentionally design a game where the skill trees are complex and there exist right and wrong answers that make the game unwinnable but also you don’t find out you made the game unwinnable until ten hours later.
[1] I got into some drama the last time I posted opinions on games, so let me be clear that when I say a game “sucks”, I just mean I don’t like it. It’s not intended to be an insult to the kinds of people who do like it, nor the devs who cater to them. But also feel free to consider TV Tropes’ phrasing: “considered a design flaw by most design philosophies today”.
I feel fine giving people mutually exclusive choices and holding them to their answer, but in return, I do have to promise either that both choices are valid (e.g. Wings of Liberty as per above) or that the game is about making correct choices (e.g. each move in chess is a choice that may turn out to be a mistake and that’s fine).
To me, the big difference is that a chess game tends to last under an hour or two. There do exist slow chess games, but they’re not the usual way to play.
That’s where I’d draw the line. Either all choices are valid, or the game ends within an hourish. I don’t want to find out ten hours later that I made the wrong choice and the game is unwinnable now and I have to start over.
do you have any idea how betrayed I was when I came to America and it turns out ‘(naturally) red headed people’ just have high-saturation brown hair
I mean we come from Asia where black tea is red tea.
I feel like colors just get exaggerated a lot in general, when talking about things with low color variation. Black, yellow, white, red, and brown skin are not quite as colorful as the terms for them imply.
This is why I crack my eggs on a paper towel over a flat countertop. I don’t get egg on the countertop, and then I can just crunch the whole thing up and huck it into the compost.
Yeah I don’t know where you all are getting your eggs but the ones I get all have inner membranes that are apparently made from the same material that’s layered into ablative anti-ballistic armor, so when I crack them on a flat surface all I end up with is an egg inside an indestructible membrane covered in shell shards. The only way I can consistently crack an egg is to thwack it on the rim of a bowl (or use a butter knife).
I do exactly what the Kitchn video does, except I hit it once against the edge of the bowl instead of the counter (if I’m frying an egg I hit the side of the pan instead). I never have problems with bits of eggshell ending up anywhere.
It’s interesting because there are plenty of things I can remember taking a while to learn, but cracking eggs was just never one of those things that I needed to learn. I just saw my mom do it, and I did it like she did, and it worked fine.
I wonder if it has to do with how hard you hit the egg?
I do just wanna say it’s kinda fucked to talk about a place of government being “desecrated” as if that fucking matters. Americans are on some whole other shit.
Come on, there’s no way the US is the only place that assigns symbolic/spiritual meaning to things.
Even if Europe is weirdly secular, I know for a fact that Asia has plenty of mysticism and superstition.
In Japan, when you say “please” or “sorry” and you want to emphasize how serious you are, it’s customary to bow.
So what do you do if you’re writing a sign?
It turns out, the answer is “you add a drawing of someone bowing”.
“We are currently being caused to strictly prohibit eating and drinking inside this shop.”
“Please take your trash home with you instead of leaving it here.”
“We are truly sorry for the inconvenience. We in the process of construction while keeping safety first. Please forgive us for the temporary intrusion.”
Unsurprising that this is the country that gave us emoji.
It’s pretty normal to see handwritten signs or letters with hand-drawn faces for extra expressiveness. You might see some in anime if you look closely!
I know that the square root of 64 is 8, and I want the square root of 69, so I’ll add some correction to 8.
If I do it with the formula
f(x+ε) = f(x) + f’(x) ε
where f(x) = √x, the correction is 5/16.
If I do it with one iteration of Newton’s method:
x - g(x) / g’(x)
where g(x) = x² - 69, the correction is also 5/16.
Not a coincidence I guess, and I guess I could derive Newton’s method from this, though I don’t see how just thinking about it, I’ll have to actually write something out.
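(A quick numeric check in Python that both routes give the same 5/16 correction, with f(x) = √x and g(x) = x² − 69:)

```python
import math

x0 = 8.0

# Linearization: f(64 + 5) ≈ f(64) + f'(64) * 5, with f'(64) = 1/(2*8)
lin_correction = (1 / (2 * x0)) * 5
print(lin_correction)  # 0.3125 = 5/16

# One Newton step on g(x) = x^2 - 69, starting at x0 = 8:
newton = x0 - (x0**2 - 69) / (2 * x0)
print(newton - x0)     # also 0.3125

print(math.sqrt(69))   # 8.3066..., so 8 + 5/16 = 8.3125 is close
```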
Newton’s method is taking a linear approximation to the function you’re trying to solve, and setting it equal to zero. So it’s really
0 = g(x) + g’(x) ε
and solving for ε.
But it’s always been interesting to me that you’re getting the same correction, even though the f and g functions aren’t actually the same.
I don’t have a good intuitive explanation of this, and should work on it at some point.
Draw the graph of what you’re doing in each case, and you will see those graphs are exactly the same up to switching the x and y axes. With f you draw a tangent and intersect with a vertical line (x=69), with g you draw a tangent and intersect with a horizontal line (y=69). It works not only for square root but for the inverse of any differentiable function.
EDIT: When I wrote this I was thinking of g(x)=x^2, for g(x)=x^2-69 the horizontal line is y=0 ofc, but that’s the same picture after a vertical translation.
I’m aware of these methods, but honestly, if you’re trying to estimate because you don’t have a calculator, linearish interpolation is just easier.
8² = 64, 9² = 81, so if 69 is 5/17 of the way from 64 to 81, √69 is roughly 5/17 of the way from 8 to 9. The sqrt graph curves downward, so let’s round up, to 8.30.
I used to be able to get threeish sig figs with this method. Really impressed my friends when I was 11.
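(The same estimate in Python, for the record:)

```python
import math

# 69 is 5/17 of the way from 64 to 81, so sqrt(69) is roughly 5/17 of
# the way from 8 to 9, nudged up a little because sqrt curves downward.
estimate = 8 + (69 - 64) / (81 - 64)
print(round(estimate, 4))        # 8.2941
print(round(math.sqrt(69), 4))   # 8.3066 -- so rounding up to 8.30 was right
```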
I tagged #shitpost and tried to write enough that even if you were confused on the first read, a reread would make it clear. I guess that wasn’t enough.
It probably doesn’t help that JMDict (the dictionary that powers Jisho.org and most other popular dictionary apps/sites) leaves off “spicy” from its list of definitions of 辛口.
Here’s some packaging which lists it in both languages:
So yeah, the joke here is that while 辛口 actually means “spicy”, you should interpret it as meaning “mild” because Japanese people have basically zero spice tolerance, and thus anything the Japanese called “spicy” you would probably call “mild”.
In apology for the confusion, I will teach you another word, おおから ookara, the Japanese word for “extra spicy”.
おおから should also be interpreted as meaning “mild”.
There are over 190 countries in the world, so it’s going to be rare for any country to be uniquely bad at anything. Even the USA’s famously high incarceration rates are probably beaten out by North Korea. The fact remains that even if the US isn’t uniquely bad at coronavirus response or imprisonment rates, it is unusually bad. If 95% of the world is doing better than you, then “the US should just copy what other countries are doing” is a good course of action even if the US isn’t “uniquely bad”.
I’d actually guess that North Korea’s incarceration rate is much lower than the US’s. The US’s high incarceration rate I think has to be blamed on some weird incentive structures (I think the big one is for-profit prisons) that don’t really exist in most of the rest of the world.
I also think there’s something weird going on with the deterrence aspect. Most of Asia has much harsher drug laws than the US, but significantly lower incarceration rates because those drug laws do successfully deter most people from using drugs. I think the US has unusually inconsistent law enforcement, which causes this problem.
Of course, you could define “incarceration” weirdly, like “most countries restrict your freedom in a large number of ways, so there’s a sense in which most countries’ incarceration rate is 100%”. You know, how people talk about how everything made in China is made by slaves and stuff. I think this is the noncentral fallacy, but I mean, I’ve heard worse arguments.
Anyway, sure. “Uniquely bad” isn’t “unusually bad” isn’t “as bad as everyone else”. What I’m arguing for is that these are all useful distinctions to make, and we shouldn’t shoot the messenger who makes them.
For instance, if 95% of the world is doing better, “the US should copy the 5% that are doing worse” is clearly the wrong approach.
For COVID specifically, I think blind copying is pretty clearly wrong. Even if you say, look, Japan is doing a lot better, are they worth copying? Their COVID strategy is “tell everyone to wear masks and basically nothing else”. As far as we can tell, they’re just lucky.
(Japan’s explanation for why they’re doing better is that they’re simply a superior race. And yes, they used the same word for “superior race” as they used a centuryish ago to justify all the atrocities they committed in World War II.)
Anyway, I think there are plenty of specific things other countries are doing that the US should copy, and plenty that the US shouldn’t (like brag about how they’re the master race). If someone is unwilling to draw a distinction between “uniquely bad” and “worse than average”, it concerns me that they’re more interested in making excuses to hate people than to fix a problem.
Hue-lightness-chroma color space forming the shape of a top is way more intuitive than regular cylindrical HSL or HSV representations. You have one radial dimension, hue, which points at a spot on the color wheel, and then a triangle whose corners represent white, black, and the brightest version of that hue, on which you pick a point. Just look at the images on this wikipedia article. The top shape makes way more sense!
Unfortunately, “chroma” and “chromaticity” are related but separate concepts, which is confusing. “Chroma” is how horizontally close you are to the bright corner of the triangular cut of color space. “Chromaticity” is that information plus hue.
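(If it helps, chroma in these RGB-derived models is just the spread between the largest and smallest channel — here’s the usual formula as a tiny Python function:)

```python
# Chroma as used in HSL/HSV-style models: the spread between the largest
# and smallest RGB components (0 = on the grey axis, 1 = fully saturated).
def chroma(r, g, b):
    return max(r, g, b) - min(r, g, b)

print(chroma(1.0, 0.0, 0.0))   # pure red: 1.0 (the bright corner)
print(chroma(0.5, 0.5, 0.5))   # mid grey: 0.0 (the white-black axis)
print(chroma(1.0, 0.75, 0.8))  # a pink: 0.25 (close to the white corner)
```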
Pink isn’t just light red. It can also be a whitish version of magenta and rose, and even orange! To my eye, I’d be tempted to just call all shades of magenta pink, but I wouldn’t be tempted to say the same about the other hues. Weird.
Fuchsia and magenta are the same damn color. I officially declare “fuchsia” deprecated.
I don’t have the patience to fully grasp the explanation, but the perceptual color wheel presented here is magnificent. It has four primary colors: Red, Yellow, Green, and Blue. And four intermediaries: Orange, Chartreuse, Cyan, and Purple. Compare that to this color wheel with three primary colors:
I think a good color wheel should match our intuitions of what shades get to be their own color. As a simple criterion, I like to think of the basic color word inventories of natural languages as the basis for judging this. Here’s a list of basic colors (source): white, black, red, green, yellow, blue, brown, purple, pink, orange, and gray. Ideally a color wheel should include as many items on this list as possible, and include as few items not on the list as possible. (Yes, I realize this is really controversial stuff and sweeps a lot of complexity under the rug. This is just a convenient model, and models are lies.)
Ignoring for now the difference between magenta and purple, the three color wheel has mostly intuitive primaries and secondaries, maybe minus cyan, which most English speakers don’t distinguish from blue. But it doesn’t include orange, which is on the above color list. And if you want to include tertiaries to get orange, you have to deal with all the others, none of which appear on the list! (Well, unless you wanted to round magenta off to pink and consider violet the equivalent of purple. Still a lot of garbage colors, tho.)
The four color wheel’s primaries and secondaries are all the same as the three color wheel’s, plus orange, but with the unfortunate addition of chartreuse. I’m not sure which color wheel I like better.
Things That Are Still Mysterious To Me:
How do cones with such high overlap result in trichromatic color vision? It’s basically like there are only two types of cone from the look of it:
How exactly does the space of colors that computers can display differ from the space of colors humans can perceive? This seems complicated enough that I probably won’t bother figuring this out for a while.
Magenta is “50% red and 50% blue”, so I guess purple isn’t? Purple is the color between magenta and blue, then. Okay. Except it says here that in HTML purple is defined as 50% blue and 50% red, just at half intensity. Playing around with a color picker, I find myself agreeing that both of those definitions of purple register in my brain as “purple” just fine.
Actually, the whole deal with the ends of the color spectrum wrapping around to form a wheel makes no sense. I’m not clear on the perceptual difference between purple and violet, and I’m not clear on the physical difference between purple, violet, and magenta.
The former may be because true violet can’t be represented with RGB pixels? I think? So any explanation online won’t be able to point out the difference to me.
Apparently violet is a spectral color beyond blue, and purple is a combination of blue and red. So that means the color wheel leaves violet dangling outside the wheel with no place to fit in? Except some color wheels do have violet. Some have both violet and purple. And some sources say that purple is actually red plus violet, not red plus blue. What? If that’s true, how can you represent purple with RGB pixels?
Human color perception: very confusing.
I mostly answered these questions in my colors and light supplement:
I’m not clear on the perceptual difference between purple and violet, and I’m not clear on the physical difference between purple, violet, and magenta.
Purple, violet, and magenta are all umbrella terms for colors between red and blue. That’s why they can be confusing: purple can be a shade of violet, or violet can be a shade of purple; they’re often not very specific terms.
The colors are often defined approximately like this:
• magenta is a bright color halfway between red and blue
• purple is dark magenta
• violet is blueish purple
But these are approximate and people do often use violet to mean purple, or purple to mean reddish purple, etc, which is what some of this confusion comes from. The concepts themselves shouldn’t be confusing, it’s just the words that are a mess.
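(A quick Python illustration of those approximate definitions, using the standard library’s colorsys — the exact hue/lightness numbers here are my own guesses for illustration, not official definitions:)

```python
import colorsys

# HLS: hue is the fraction of the way around the color wheel,
# and 5/6 = 300 degrees is the halfway point between red and blue.
magenta = colorsys.hls_to_rgb(5/6, 0.5, 1.0)   # bright, halfway red-blue
purple  = colorsys.hls_to_rgb(5/6, 0.25, 1.0)  # the same hue, but darker
violet  = colorsys.hls_to_rgb(0.78, 0.25, 1.0) # nudged toward blue

print([round(c, 2) for c in magenta])  # [1.0, 0.0, 1.0]
print([round(c, 2) for c in purple])   # [0.5, 0.0, 0.5]
```

Note that “dark magenta” comes out as (0.5, 0.0, 0.5) — exactly the HTML definition of purple mentioned earlier (50% red, 50% blue, at half intensity).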
Apparently violet is a spectral color beyond blue, and purple is a combination of blue and red. So that means the color wheel leaves violet dangling outside the wheel with no place to fit in?
I hope my two posts have adequately explained this, but in case they haven’t: spectral violet and “mixture of red and blue” violet are the same color on the same part of the color wheel.
They may be different kinds of light, but human eyes can’t tell the difference between them, so we consider them the same color.
Every color wheel contains every color (well, every hue, but every color that isn’t black/grey/white has a hue corresponding to a part of the color wheel), though they might disagree on where any given color is.
How do cones with such high overlap result in trichromatic color vision?
If you look closely, the overlap isn’t that high. Especially on the right side: if you draw a vertical line at 600nm, notice that it doesn’t activate M cone cells very much, but nearly fully activates L cone cells.
On the left side, they’re closer, but still different enough to tell colors apart, and the amount of activation of S cone cells provides additional information. The L*/a*/b*-style post-processing your neurons do also helps pull them further apart.
But, also, this might be related to why there are so many more colors that look green to us, than any other color on the scale. Green is just a bigger chunk of the color wheel; people often notice that more randomly-generated colors look green than any other color (like in the xkcd color survey).
I started watching Technology Connections (because Voxette links to his videos a lot), and he has some interesting videos about color. While I think his other videos explain things well, the color videos seem like they’re listing facts without explaining them very well, so I’m going to see if I can make it less confusing.
I think at this point most people know that all colors are made of different combinations of primary colors, red/green/blue.
But it’s easy to forget that having three primary colors isn’t a fact about light; it’s a fact about how human color vision works. There are actually infinitely many primary colors of light; it’s just that humans can only distinguish three of them.
The usual example is yellow: pure yellow light is a different thing from a combination of red and green light. But they both look yellow to humans, because humans don’t have yellow light receptors, so we can’t see the difference.
But while we can’t see the difference if the light goes directly into our eyes, it still makes a difference if the light bounces off something else. For instance, if you shine pure yellow light onto a pure yellow box, it will look yellow, but if you shine red+green light at a pure yellow box, it will look black.
This is also true of white light! A yellow box could look darker from a white lightbulb that’s just made of red+green+blue light, than from sunlight which is every color at once. That’s what light bulb boxes are talking about when they talk about “color rendering index” – how many colors their “white” light is made of.
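(Here’s a toy Python model of that yellow-box example — real spectra are continuous and the wavelengths below are approximate, but the point survives:)

```python
# Spectra as {wavelength_nm: intensity}. A "pure yellow" box reflects
# only light near 580 nm and absorbs everything else.

def reflected(light, reflectance):
    return sum(light.get(wl, 0) * r for wl, r in reflectance.items())

yellow_box = {580: 1.0}                    # reflects only pure yellow

pure_yellow_light = {580: 1.0}
rgb_yellow_light  = {620: 0.5, 530: 0.5}   # red + green: also *looks* yellow

print(reflected(pure_yellow_light, yellow_box))  # 1.0 -- box looks yellow
print(reflected(rgb_yellow_light, yellow_box))   # 0.0 -- box looks black
```

The two light sources are indistinguishable when viewed directly, but the box only reflects one of them.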
The other fun fact people like to talk about is that magenta isn’t a “real” color. Unlike other colors, there’s no pure magenta light (it’s not a primary color of light), it can only be made as a combination of red and blue light.
This one points out that brown is just dark orange. Which means that brown isn’t a color of light, because no dark color is a color of light. This is at least slightly counterintuitive because light colors are colors of light.
Like, red is a color of light. Light red (pink) is a color of light (you can think of it as a mixture of a lot of red plus some blue and some green). But dark red (maroon) is not a color of light, it’s just red light, but less of it.
Because of this, nothing is ever brown by itself. Brown things are just orange. It’s only if you put them next to a brighter color that they become brown.
Oh, yeah, I can go a lot more in depth about color. Your link gets some things wrong (which might be why you’re confused by violet), so I’ll try to recap it and explain it better.
Magenta is pretty straightforward. In case my explanation earlier wasn’t enough: this is a color wheel (how we see color):
And this is the light spectrum (what light is actually made of):
The color wheel wraps the light spectrum by gluing together the two ends with a new color, magenta, which doesn’t exist in the light spectrum.
While the other colors are pure colors of light, magenta, as mentioned earlier, can only be seen as a mix of colors: it’s what you see if you see a lot of light from the edges of the spectrum (the red and blue parts) but not much from the middle (the green parts).
That’s all you need to know for how the wheel and the line are related, but it hints at an interesting little mystery: what’s up with violet?
Unlike magenta, violet does exist on the light spectrum, to the left of blue. And what’s weirder, it looks reddish blue! Why does it look reddish when it’s literally on the opposite side of the spectrum from real red?
Especially because cone cells have spectral sensitivities that look like this:
Red cone cells don’t respond to violet light (around 450nm) at all! What’s going on?
In 1931, the CIE XYZ color space was the first attempt to investigate this. Researchers asked humans which mixes of light matched which pure wavelengths, and used math to get this graph of how the brain sees color:
This explains why violet looks reddish. Remember from the graph: violet, around 450 nm, is where the blue curve peaks and the red curve has its small second peak; pure blue is actually around 480 nm, where the green and red curves intersect (which is why pure blue doesn’t look greenish or reddish: it’s exactly as green as it is red).
But we still have two mysteries.
Mystery one: this looks suspiciously close to how cone cells respond to light. Except for that weird red bump in the violet wavelengths. What’s up with that?
And furthermore, S (“blue”) cone cells respond most to violet light, M (“green”) cone cells to yellow-green light, and L (“red”) cone cells to yellow-orange light. How did we end up with red, green, and blue as our primary colors?
Mystery two: some colors seem like opposites. Most colors mix just fine (reddish blue is purple), but some pairs seem not to be mixable at all.
Reddish green isn’t a real color. You can mix red and green paint (and end up with something brownish) or red and green light (and end up with something yellowish), but neither of these look like how you would imagine reddish green. You straight-up can’t imagine reddish green.
Known opposites include red-green and yellow-blue. But these opposites don’t make any sense no matter what three primary colors you choose. If red/green/blue are the primary colors, why can green and blue mix, but not red and green?
These two mysteries were simultaneously solved by the CIELAB color space in 1976. It turns out, we don’t see in S/M/L or in R/G/B. We see in L*/a*/b*.
In other words, our brains see colors in terms of L* (how bright they are), a* (how red vs how green they are), and b* (how blue vs how yellow they are).
Perceptually, we actually have four primary colors! Red, green, blue, yellow.
When more L cone cells are activated, colors look redder on the red-green scale. When more M cone cells are activated, colors look greener on the red-green scale. And when more S cone cells are activated, colors look just a little bit redder on the red-green scale.
When more S cone cells are activated, colors look bluer on the blue-yellow scale. When more L and M cone cells are activated, colors look yellower on the blue-yellow scale.
And there we finally have it. Violet light looks reddish because S cone cells being activated makes it redder on the red-green scale.
(How does RGB fit into this? You can change your red/green light ratios to get wherever you want on the red-green scale, and after that, the amount of blue light will get you wherever you want on the blue-yellow scale. So RGB is enough to cover most of the range of colors your eye can see.)
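If you want to play with the L*/a*/b* axes yourself, here’s a minimal sketch of the standard sRGB → XYZ → CIELAB conversion (assuming the D65 white point, with the published matrix values rounded to four decimals):

```python
def srgb_to_linear(c):
    # Undo sRGB gamma so channel values are proportional to light intensity.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    r, g, b = [srgb_to_linear(v) for v in (r, g, b)]
    # Linear RGB -> CIE XYZ (standard matrix, D65 white point).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # perceptual compression curve
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116 * fy - 16      # L*: lightness
    a = 500 * (fx - fy)    # a*: positive = red, negative = green
    bb = 200 * (fy - fz)   # b*: positive = yellow, negative = blue
    return L, a, bb
```

Plugging in screen primaries: `rgb_to_lab(0, 1, 0)` comes out with a* strongly negative (green side), `rgb_to_lab(1, 1, 0)` with b* strongly positive (yellow side), and `rgb_to_lab(0, 0, 1)` with b* strongly negative (blue side), matching the opponent-channel story above.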
The USA having 3000 deaths per day is like Poland having 370 deaths per day, and we’ve been doing worse than that for quite some time and nobody seemed to notice or care except for a single post by Slarti, so claiming that the US is handling it uniquely badly still looks like an isolated demand for rigor to me.
Very few people have achieved the level of rigor set by Slarti with respect to coronavirus.
I have finally put my finger on why this kind of thing bothers me; it reminds me of the Iraq War, where criticism of our behavior there was met with, “Oh, you’re one of the blame America first crowd, huh?”
There was a constant attempt to deflect the question of whether the US was behaving badly (We absolutely were) to whether we were uniquely bad, to whether we were worse than everyone else or worse than other first world countries or worse than Saddam.
To the latter my opinion was “Who cares?”
The question of whether the US is uniquely bad at dealing with Coronavirus strikes me as nothing but an attempt to fog up the fairly obvious point that our government has been bad at dealing with Coronavirus, and send us spinning off into rather witless culture war bullshit.
Here’s an analogy: suppose you go back in time to the early 20th century, and you end up working for the Studebaker company. Now, you’re in a time when cars don’t have seatbelts, and you say, “Studebaker needs to start including seatbelts on every model, our cars are a lot less safe than they should be.”
Your boss comes back with, “Oh, so you think Studebaker is uniquely unsafe? I’ll have you know that we hew to the exact same guidelines as every other major auto manufacturer, and I have to wonder why you single us out when our safety features are industry standard.”
The fact that no automobiles have seatbelts standard doesn’t mean that nobody ought to have seatbelts standard; it means everybody is ignoring low-hanging fruit. And figuring out how to get that low-hanging fruit is a hell of a lot more productive than explaining that, if you think about it, nobody’s picking it, so it’s really suspicious that you’re singling us out particularly.
I believe basically the exact opposite point. I think it’s very very important to distinguish between “bad, in the same way others are bad” and “uniquely bad”.
I think people often make bad predictions off bad models, and these bad models often come specifically from conflating these two.
If you think the US is uniquely bad, you will generate solutions like “the US should just copy what other countries are doing”. This is 100% the wrong solution if the US is actually bad in the same way others are bad.
Another symptom of this is an utter lack of empathy. If someone thinks I’m uniquely bad, they’re more likely to scold me rather than help me. If someone thinks I’m bad in the same way others are bad, they’re more likely to be sympathetic that maybe I’m dealing with a hard problem and the “obvious” solution they thought of has flaws that they’ve never realized because they’ve never experienced the problem.
I think classifying people as evil and irredeemable is distressingly common and should be fought at every turn, and pointing out that someone isn’t uniquely bad is an important part of this.
If you don’t understand why hypocrisy is bad, you end up with toxic double standards with the flimsiest justifications like “only white people can be racist because of my new definition of racism no one else uses”. Instead of caring to fix problems, you just make up some excuse why it doesn’t apply to you. No. We are otagaisama (we’re all in the same position). Stop throwing stones in our glass house. We are perfectly situated to help each other.
If your reaction to “you do this too” is “deflecting! ad hominem! tu quoque!” you’re not wrong, you’re just an asshole. All they’re asking you to do is have a little empathy and acknowledge that they’re not uniquely bad. Is that really that hard?
Mild translation annoyance: audio: *anime character screams* subtitles: "I feel afraid". like it works, but it feels like you are adding extra coherency to the character that wasn't originally there.
Answer:
I think a lot of the thing is that Japanese people do say words where Americans say interjections which we don’t think of as words in the usual sense. A good example is getting hurt, where “ow!” in America is itai! (“painful!”) in Japan. Or fear, where “aaaaah!” in America is kowai! (“scary!”) in Japan.
These should be translated to “ow!” and “aaaaah!” but I can see how a translator might not realize this if they’re thinking too literally (focusing on translating individual words rather than thinking about what people would have said).
If 6-bit characters & 36 bit words are "cool," how about variable length words with 7 bits per character, like the IBM 1400 series? It had 6 data bits (so no lower case) plus a "word mark" bit, which indicated the last character in the word (hence variable word length). This doesn't count the parity bit. Each character corresponded to a decimal digit, so there was no difference between the computable and the string representation of a number; it was a decimal computer.
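Word marks make decoding variable-length words trivial. Here’s a toy Python sketch of the scheme as described above (the bit layout is purely illustrative, not the 1400 series’ actual memory format):

```python
# Toy sketch: each 7-bit unit carries 6 data bits plus a word-mark bit
# flagging the last character of a word.
WORD_MARK = 0b1000000  # bit 6 (illustrative placement)

def split_words(units):
    """Split a stream of 7-bit units into variable-length words."""
    words, current = [], []
    for u in units:
        current.append(u & 0b0111111)  # keep the 6 data bits
        if u & WORD_MARK:              # word mark set: word ends here
            words.append(current)
            current = []
    return words

# Two words, [1, 2, 3] and [4, 5], marked on their final characters:
stream = [1, 2, 3 | WORD_MARK, 4, 5 | WORD_MARK]
```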
contemplating how things like “half as loud” and “half as cold” make no sense because those concepts only have meaning in a deeply unintuitive physics sense, but “half as sweet” is meaningful insofar as it’s understood as a shorthand for “with half as much sweetener, by volume” and even in things with natural sweetness you can calibrate intuitively against that standard
wait, what?
“Half as cold” - I can understand your point, although I disagree with it, because the relevant temperature scale isn’t absolute but relative (-to-room-temperature).
But “half as loud” - I don’t understand at all. 0 dB, unlike 0 K, doesn’t require extreme experimental conditions to simulate.
I guess you could say that if room temperature is, say, 20 C, then 10 C is half as cold as 0 C, but I think most people wouldn’t use it that way.
But with “half as loud” the problem isn’t that it’s an ill-formed or useless concept, just that there’s no way to get a handle on it in practical life. You could define the term, but the definition wouldn’t be usable, because you have no way to compare the loudness of two sounds, or to make sounds at specific volume ratios to see how they compare.
I guess you could say that if room temperature is, say, 20 C, then 10 C is half as cold as 0 C, but I think most people wouldn’t use it that way.
No, this isn’t true!
You cannot multiply or divide Celsius or Fahrenheit temperatures in any meaningful sense, as they are not ratio scales. 20 C is not twice as hot as 10 C. This is because the zero point is arbitrary.
You have to use Kelvin or Rankine. 20 K is twice as hot as 10 K.
you’re misunderstanding, they’re saying twice as cold not twice as hot and the ‘zero point’ in the coldness scale is room temperature not 0C.
Because above room temperature it’s not cold at all! it’s now hot!
so 10C is twice as cold as 15C if we’re calling 20C room temperature/ the temperature thats comfortable and neither hot or cold.
I have no idea what “twice as cold” means if not half as hot. I switched the valence of the example for clarity.
It doesn’t matter whether you call it a “coldness scale” or where you set the zero point; change the names and you’re still measuring the same thing: temperature.
10C is twice as cold as 15C if we’re calling 20C room temperature
I’m confused by your math here, even under your premises. How is 10 twice 15?
Now maybe what you want to measure is not temperature but subjective human experience of heat/coldness. That would seem to have both minimum and maximum points, and the minimum is considerably higher than absolute zero. But the minimum for damn sure isn’t room temperature.
Even then, 2 “degrees coldness” (18 C?) is not twice as cold as 1 “degree coldness” if your zero point isn’t as cold as it can get. It just isn’t, no matter what quality you’re measuring.
yeah it’s meant to be a subjective human scale. i’m not necessarily endorsing it but that’s what discoursedrome was describing with 10C being half as cold as 0C.
if 20C is the ‘zero’ point of neither hot or cold, 15C is -5C from comfortable and 10C is -10C and 0C is -20C. which is also -5K, -10K and -20K from comfortable.
when you’re looking at changes in temp, saying twice the change makes some sense again, although it probably doesn’t actually match subjective experience (iirc human experience is often on a log scale, as seen with sound, so maybe that could apply to temperature too?).
Another possible ‘zero point’ for comfort might be normal body temperature, although I’m not sure whether it should be core or skin or what. But the difference in temperature between you and what you touch determines how quickly energy is transferred either from you or to you. 20C was just an example, but there’s probably some temp where, if you set it as neither hot nor cold, then doubling the difference would double the rate of energy transfer, I think?
Okay, I see the math now.
Anyway, what I am saying is there is no sense in which room temperature is the true zero point on any of these proposed scales. Maybe the most basic way of saying it is that in order for ratios to be meaningful, the zero point has to actually mean “none of it”. If it can have negative values, it doesn’t qualify.
And yes, a subjective human warmth scale surely is not one-to-one with temperature.
I don’t see how having negative values disqualifies something from being able to be used in ratios.
I can have twice as much money as I had last week, even if it’s possible for me to have negative money (go into debt).
Similarly, I think it’s pretty intuitive to think of room temperature as “neither hot nor cold”, and then “twice as cold” means “twice as far below room temperature”.
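The “twice as far below room temperature” reading is just arithmetic. A toy formalization, assuming the 20 C comfort point from the discussion above:

```python
# Coldness measured as distance below an assumed "neither hot nor cold"
# room temperature of 20 C (the baseline is the assumption here).
ROOM_C = 20.0

def coldness(temp_c):
    return ROOM_C - temp_c

# On this scale, 10 C (coldness 10) is twice as cold as 15 C (coldness 5).
```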
To compare to a related thing, first consider that for solid things you touch, body temperature (37°C) is the zero point. Touching 33°C iron will cool you down twice as fast as touching 35°C iron, so it’s sensical to call it “twice as cold”.
For air temperature, the zero point is lower because your body is constantly producing heat, and room temperature is approximately the point at which your body “prefers” to be because it’s easiest to produce heat at the same rate you’re losing it. And distance from this preferred temperature is what makes your body work harder to survive. So you can think of “twice as hot” in terms of “twice as hard for your body to handle the heat”.
The most counter-intuitive thing I learned is there’s a really weird theorem proved in the 1970s that the marginal tax rate on the highest-earning person should be zero.
A well-known early result of the Mirrlees (1971) model is the optimality of a zero top marginal tax rate. Recent work has undermined the practical relevance of this finding, but the intuition behind it may still have important implications for the taxation of high earners.

The original Mirrlees argument runs as follows. Suppose there is a positive marginal tax rate on the individual earning the top income in an economy, and suppose that income is y. The positive marginal tax rate has a discouraging effect on the individual’s effort, generating an efficiency cost. If the marginal tax rate on that earner was reduced to zero for any income beyond y, then the same amount of revenue would be collected and the efficiency costs would be avoided. Thus, a positive marginal tax on the top earner cannot be optimal.

This result, which has been called “striking and controversial” (Tuomala, 1990), is often discounted as of limited practical relevance. Strictly speaking, this result applies only to a single person at the very top of the income distribution, suggesting it might be a mere theoretical curiosity. The potential to redistribute from the highest earner to the population as a whole may justify large marginal rates on the second-highest earner and other high-ability taxpayers. […]

Nonetheless, the intuition behind the zero top rate result suggests that an important task for policy analysis is to identify the shape of the high end of the ability distribution. In early numerical simulations of the Mirrlees model, Tuomala (1990) finds, “it will be seen that in all cases reported … the marginal tax rate falls as income increases except at income levels within the bottom decile.” In Tuomala’s simulations, the efficiency costs of redistribution are large for much of the high end of the income distribution, justifying declining rates for a broad range of high incomes. These results suggest that the zero top rate result was an instructive, if extreme, illustration of the power of incentive effects to counteract redistributive motives in setting marginal rates on high earners. In contrast, Saez (2001), building on the work of Diamond (1998), also carried out numerical simulations and concluded, in dramatic contrast to earlier results, that marginal rates should rise between middle- and high-income earners, and that rates at high incomes should “not be lower than 50% and may be as high as 80%.” The primary difference between these findings seems to reside in the underlying assumptions about the shape of the distribution of ability. Specifically, Tuomala assumed a lognormal distribution, whereas Diamond and Saez argued that the right tail is better described by a Pareto distribution, which has tails thicker than a lognormal.
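The lognormal-vs-Pareto difference is easy to see numerically. A quick sketch with made-up, uncalibrated parameters (nothing here comes from the paper; the point is only the qualitative tail behavior):

```python
import math

# Illustrative parameters only; not fit to any real income data.
def lognormal_tail(x, mu=10.0, sigma=1.0):
    """P(X > x) for a lognormal distribution."""
    return 0.5 * math.erfc((math.log(x) - mu) / (sigma * math.sqrt(2)))

def pareto_tail(x, x_min=2e4, alpha=2.0):
    """P(X > x) for a Pareto distribution."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

# Far into the right tail, the Pareto keeps vastly more probability mass,
# which is what "thicker tail" means:
for x in (1e5, 1e6, 1e7):
    print(f"{x:>10.0f}: lognormal {lognormal_tail(x):.2e}, pareto {pareto_tail(x):.2e}")
```

With these parameters the lognormal actually has more mass at moderately high incomes, but by a factor-of-ten further out the Pareto dominates, and the gap keeps widening; that’s the tail assumption driving the different optimal-rate conclusions.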
As point 5 explains, optimal taxation theory (of course, under utilitarian normative assumptions) says that people should be “tagged” with characteristics like sex, height, and race, so that higher-“ability” (income-generating potential) people can be taxed more. For instance, lower taxes for women because they are more likely to be discouraged from working and higher taxes on tall people (or less welfare for them).
The paper notes, “Few economists advising political candidates or elected government officials would have the temerity to advance these ideas in any practical discussion of tax policy.”
Point 7 shows why, even under ethical assumptions that highly value redistribution, capital income should not be taxed:
The intuition for a zero capital tax can be developed in a number of ways. Two possibilities draw on the results from the previous section. First, because capital equipment is an intermediate input to the production of future output, the Diamond and Mirrlees (1971) result [that only final goods should be taxed] suggests that it should not be taxed. Second, because a capital tax is effectively a tax on future consumption but not on current consumption, it violates the Atkinson and Stiglitz (1976) prescription for uniform taxation. In fact, a capital tax imposes an ever-increasing tax on consumption further in the future, so its violation of the principle of uniform commodity taxation is extreme. […]

[A]ny tax on capital income will leave the after-tax return to capital unchanged but raise the pre-tax return to capital, reducing the size of the capital stock and aggregate output in the economy. This distortion is so large as to make any capital income taxation suboptimal compared with labor income taxation, even from the perspective of an individual with no savings (Mankiw, 2000).
There’s a weird exception to this under point 8, which says that tax rates should track people over time. Essentially, early retirees (those who have surprisingly low labor relative to capital income) should be penalized for trying to “cheat the system” by building up a self-sustaining nest egg and then ceasing to work as much as they could. They should have their capital income taxed, but this should be balanced by capital income subsidies to people who have surprisingly high labor income (I guess 90-year-olds who work 60 hours a week?). So on net, the government shouldn’t receive income from capital gains tax.
The conclusion remarks that some of these findings are respected by policy (such as by moving toward a flatter tax schedule) but many are not:
Why not? One possibility is that theory is right and that policymakers and the public are
slow to appreciate certain valuable but counterintuitive insights. Another possibility, at least as
plausible, is that broader tradition in public finance includes other ideas that are often ignored in
modern optimal tax theory, such as the benefits principle that a person’s tax liability should be
related to the benefits that individual receives from the government and the horizontal equity
principle that similar people should face similar tax burdens. Whether and how to incorporate
such ideas into the theory of optimal taxation remain open questions.
Hm, this paper doesn’t seem to cover Georgism at all, which is weird because it always seemed like the most optimal way to tax ever since I heard about it.
The idea behind Georgism is that most things that involve money are things that you don’t want to tax, because they’re things you want more of.
Doctors earn money when they save lives, farmers earn money when they make food, carpenters earn money when they build houses. In theory, most people earning money are doing things that people want them to do (so much that they’re willing to pay for it). Taxing them means we get less of that, which is bad. Ideally you only want to tax the things you don’t want to happen, i.e. you only want to tax externalities.
I’m pretty sure you know all this already. The hard part is, are there enough taxable externalities that you could run a government off them? Especially after you spend some of that money offsetting those externalities?
It turns out, there’s a big one: land ownership.
Land ownership is a weird issue. There’s a sense in which land should belong to everyone – after all, no human created the land, it’s a natural resource like breathing oxygen, and a lot of people want to be able to do things on land. It’s kind of weird for humans to say “this land is mine now, the rest of you have to stay off it.”
And yet, basically every government in the world enforces land property rights. (China is weird about it, you just rent land from the government for limited times, but it’s meant to achieve similar results to land ownership.) There are many reasons to allow land ownership. Most of the best uses of land (living in it, farming on it, building buildings to provide useful services with it) require some degree of control of it, enough to prevent other people from using it. And these are all really important things to be able to do.
And so, the compromise: we allow land ownership. And yet, land ownership is a pretty classic externality: owning land prevents other people from using it. And the value of that externality is how much other people would pay to be able to use it.
Thus, land value tax. Go ahead and have your land. Just pay society for depriving everyone else of it. Want to keep your single-family home in an area with a housing shortage? Sure, just pay the difference for everyone who could have lived in an apartment complex there to live somewhere else.
Wikipedia has a lot of good things to say about it like “Because land value tax would apply to foreign land speculators, the Australian Treasury estimated that land value tax was unique in having a negative marginal excess burden, meaning that it would increase long-run living standards.”
Everything I’ve ever read on Georgism sounds like that. “Every other tax is bad for society, except for land value tax which is good for society”. It’s actually kind of confusing to me how non-mainstream it is.
I definitely had some good fucking insights in last night’s dreams, but they’re gone now. I really should start keeping a dream journal so I can tap into this stuff
Tried it once, reading it in the morning felt like reading the last words of a dying illiterate man hoping to buy gods forgiveness
Yeah, I get the impression that dreams (and certain drugs) give you the feeling that you have good insights, but don’t actually give you good insights.
Now, showering, good insights you think of while showering are 100% legit.
We don’t really have it afaik, except as something in a ‘non-alcoholic beer’ type niche, so you’d probably ask for a non-alcoholic cider (but I don’t think it’s as commonly available as non-alcoholic beer? I only remember seeing it this one place.)
For a non-alcoholic apple drink, people drink apple juice (or an ‘apple juice drink’), but I gather American soft cider is not the same thing as that, and is more like that nonalcoholic cider they used to have in a pub I used to go to.
For a harder alcoholic apple drink, there’s scrumpy. ;)
edit: you know what, I have a vague notion that they might drink soft cider in Cornwall but idk what they call it
So, in the beginning, text in computers was made from a set of 95 printable characters, called ASCII, including 10 digits, 26 uppercase letters, 26 lowercase letters, and some punctuation. They came from typewriters (a lot of the weirdness in ASCII, like `, comes from typewriters), and they were good enough for the Americans who invented computers. [1]
But then people speaking other languages wanted to use computers! And ASCII wasn’t enough for their language. How ungrateful! The original solution to this problem was that different computers used different codes for different languages, and files from one computer just couldn’t be opened on other computers with different languages.
This was fine for a while, until people started wanting to do that. So then computers had to support multiple languages, and if you opened a file and it wasn’t in English, your computer would ask you which language it was in so it would know how to open it.
At some point, the internet started taking off, and having to ask your friend what language this file is in because you tried to open it with English and it didn’t work got annoying; and not being able to talk about words in one language when typing in another language got annoying; and not being able to copy/paste because your languages were wrong got annoying; and everyone decided to just get together in one huge committee and make one single Universal Code for all languages.
They called it Unicode, for short.
That’s what we use now, that’s why you don’t have to deal with any of that. You can just copy/paste text from anywhere to anywhere now. It’s wonderful and everyone loves it. [2]
…except Chinese and Japanese users.
You see, Unicode decided that Chinese hanzi, Japanese kanji, Korean hanja, and Vietnamese Chữ Nôm should use the same characters. (Korea and Vietnam don’t care much because they switched to using hangul and Latin letters, so these days it’s mostly China and Japan getting frustrated by it.)
This is sort of like deciding b (English B) and β (Greek Beta) should be the same letter, and it should just look like b if you were using an American font and β if you were using a Greek font.
By which I mean, it would seem like no big deal, until you were writing text somewhere you couldn’t choose the font, like… most social media including Tumblr, and then suddenly if you tried to write or copy/paste Greek, it would show up with a mix of English and Greek letters.
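You can see the difference in Python: the Latin/Greek pair gets two separate codepoints, while the Chinese and Japanese forms of a character like 直 share a single one, so only the font decides which shape you see.

```python
# b and β are separate codepoints, so the text itself can tell them apart:
print(hex(ord("b")), hex(ord("β")))  # 0x62 0x3b2

# But 直 is one codepoint shared across Chinese, Japanese, Korean, and
# Vietnamese; nothing in the text records which shape was intended.
print(hex(ord("直")))  # 0x76f4
```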
Unicode has its excuses for this. There are around 10,000 Chinese characters, and it would have been hard to make room for four lists of them, one for each language using slightly different characters. And most of the characters look similar enough that people can still recognize them.
But this is a real problem! You can go on the Internet and find people who learned the wrong 直 because they were trying to learn Japanese but their computer defaulted to Chinese fonts.
If you’re wondering which font yours defaults to:
This is mitigated by the characters usually not looking that different. 直 above is the worst offender. The differences are usually pretty minor:
In practice, the effects are:
• Native speakers have their default font set to that language, and only rarely get annoyed by the wrong character appearing.
• Language learners sometimes accidentally learn words wrong, if they didn’t think to set up their computer language ahead of time, or if they try to simultaneously learn Chinese and Japanese.
• People’s names are often a specific variant, and spelling their name correctly is often a big deal to them, which Unicode often prevents.
• Linguistics bloggers get massively frustrated. Grr!
Unicode did later add special codes called variant selectors which are intended to alleviate this problem, but unfortunately they have basically no support. :(
[1] So clearly “inventing computers” is kind of a vague concept and a lot of people instrumental in the long road towards modern computers, like Archimedes and Charles Babbage, were clearly not American, but you know what I mean.
[2] Also except people who design systems that need it to be possible to tell letters apart. Those people also hate it, because now people can do things like replace o (lowercase English O) with ο (lowercase Greek Omicron) to get past bad-word filters. But we already had that problem with l (lowercase L) and I (uppercase I).
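A quick Python illustration of why this is hard to filter: the look-alike strings are genuinely different codepoints, and since omicron is a real letter rather than a compatibility variant, even Unicode normalization won’t merge them.

```python
import unicodedata

# Two visually near-identical strings built from different codepoints:
latin = "foo"
greek = "f" + "\u03bf" + "o"  # middle letter is Greek omicron, not Latin o

print(latin == greek)              # False: an exact-match filter misses it
print(unicodedata.name("\u03bf"))  # GREEK SMALL LETTER OMICRON

# NFKC normalization does NOT merge them (omicron is a distinct letter,
# not a compatibility form of Latin o):
print(unicodedata.normalize("NFKC", greek) == latin)  # False
```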
This is the standard way to type Chinese on my computer. You can tell I’m writing Chinese because it says zhijie and not the Japanese chokusetsu.
And yet, if you look closely, you can also tell that it’s showing me the Japanese word chokusetsu:
And not the Chinese word zhijie:
This is a real modern up-to-date 2020 computer. It’s just that typing Chinese and Japanese on the same computer is an unsolved problem in 2020.
[chart: average breakdown of posts by type: photos, videos, texts, gifs, audio]