ChaoticNeutralCzech

joined 2 years ago

Nah. Still, I find them pretentious and prefer en-dashes (which the text is also littered with): 20 em-dashes (—) and 5 en-dashes (–) – counted by my text editor – is just too many.

[–] ChaoticNeutralCzech@lemmy.one 5 points 1 week ago* (last edited 14 hours ago) (1 child)

They have released a statement about this so rest assured it's OK.

Edit: I visited them at 39c3 and they're humans (with cat ears).

Not yet; the TLD application still needs to be submitted (that's what costs all that money) and approved, so it will take about 1.5 years if successful.

What's your 🏴󠁥󠁳󠁣󠁴󠁿Catalan project BTW?

[–] ChaoticNeutralCzech@lemmy.one 3 points 2 weeks ago* (last edited 2 weeks ago) (1 child)

Well, they're a team of six, including a dedicated graphic design & marketing person; they've produced a video and an FAQ too, and they've succeeded in bringing the ICANN application fee down as a non-profit. Yes, "kinship-based infrastructure" rubs me the wrong way too, but that's because it reeks of corporate investor talk, not AI. So I'm pretty sure they did take the time to write the article and every piece of text on the website, not to mention the legal document (bound by Belgian law) that ensures the money goes towards the stated mission.

I don't like that a big tech corporation can register .meow too but there's no avoiding that. Even the Catalan domain, whose purpose is to promote their language and culture, has seen "misuse" such as nyan.cat.

 

The most :3 top-level domain could become real! 100% queer-owned and queer-operated with proceeds funding LGBTQIA+ infrastructure.

[–] ChaoticNeutralCzech@lemmy.one 7 points 9 months ago

No way Teams is the most lightweight vehicle around

[–] ChaoticNeutralCzech@lemmy.one 3 points 10 months ago

Now do Krita

...oh wait

 

Sorry, I can't speak French so I can't do that myself. Yes, it's a little hypocritical of me to laugh at what I think is a machine translation while using one myself. Maybe the French ones are not machine-translated but I'm guessing they might be because half of the German ones are ridiculous.

I think this is one of the worst ones:
"Attention: Surveillez votre tête" ("Caution: Watch your head")

US company sells ridiculously machine-translated US safety signs that obviously don't follow European standards. Feel free to pick the funniest ones and make a collection.


[–] ChaoticNeutralCzech@lemmy.one 1 point 11 months ago

You're right. Later in the video, this shot with the same fake film effect appears, and that one is indubitably AI (look at the bottom right).

The video narration implied this is footage from a rare or unfinished film, though.

[–] ChaoticNeutralCzech@lemmy.one 2 points 11 months ago* (last edited 11 months ago) (1 children)

Which spoiler works in Thunder?

Lemmy syntax:

::: spoiler Lemmy syntax
<content>
:::

Reddit syntax: >!<content>!<
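
For example (an illustrative snippet, not from the original comment), a block written like this should render as a collapsible spoiler titled "Example" in compliant clients:

::: spoiler Example
Hidden until expanded.
:::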

[–] ChaoticNeutralCzech@lemmy.one 5 points 11 months ago (2 children)

Are you implying it's from a stock footage site that used AI? They would definitely get their previews indexed on search engines. Alternatively, it's been generated on request, which would make it impossible to find.

[–] ChaoticNeutralCzech@lemmy.one 1 point 11 months ago (4 children)

I put them in a spoiler. Compliant viewers should hide them by default.

[–] ChaoticNeutralCzech@lemmy.one 5 points 11 months ago (1 child)

The film artifacts are quite unusual (the vertical lines span exactly one frame, and one of the lighter spots stays in pretty much the same place between frames 1 and 2), but I noticed no other red flags. In fact, the hair is very convincing. The eyes seem to reflect different things, but her right one is somewhat in the shade, and the light source's reflection could differ because it's close to her face.

 

Found by @CrayonRosary@lemmy.world: it originates from Dune by Alejandro Jodorowsky - Teaser Trailer (1976).


Source: used as B-roll in the intro of this video: https://www.youtube.com/watch?v=f8AJk2Sns_k&t=3

Here are the individual frames, but image search (SauceNAO, Google Lens, IQDB, Yandex) has not been helpful.

Frames

Transcript: Close-up shot of a woman's face with a neutral expression, short brown '80s hair, lipstick, thick sharp eyeliner, and glowing aqua irises. Widescreen, with a higher-than-usual number of film artifacts.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the last one in the series. Bye!

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, digital art upscaled with a well-trained algorithm will likely show little to no undesirable effects. Why? The drawing originated as a series of brush strokes, fill areas, gradients, etc. that could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.)

Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw every stroke and arc in the image with vector shapes in the same colors, then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
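
If you want to reproduce these upscales yourself, here is a minimal sketch of the step described above, assuming the waifu2x-ncnn-vulkan command-line build is installed with its bundled model directories (file names here are placeholders, not from the original posts):

```python
# Minimal sketch: 2x-upscale a drawing with the waifu2x-ncnn-vulkan CLI,
# using the model named in this series. Assumes the binary is on PATH and
# the "models-upconv_7_anime_style_art_rgb" directory it ships is available.
import subprocess

def upscale_drawing(src: str, dst: str, scale: int = 2) -> None:
    """Upscale `src` to `dst`; denoise level 0 suits clean line art."""
    subprocess.run(
        [
            "waifu2x-ncnn-vulkan",
            "-i", src,          # input image (PNG recommended for drawings)
            "-o", dst,          # output image
            "-s", str(scale),   # upscale ratio (1/2/4/...)
            "-n", "0",          # denoise level; raise it for JPEG-artifacted input
            "-m", "models-upconv_7_anime_style_art_rgb",
        ],
        check=True,             # raise if the CLI reports an error
    )

upscale_drawing("original.png", "upscaled.png")  # hypothetical file names
```

Per the Nyquist argument above, a clean 2x pass on flat-shaded art should mostly sharpen edges rather than invent detail; it is compression artifacts in the source that force the model to guess.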

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Gazebo on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Infinity Gauntlet on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Frostpunk Automaton on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Land Dreadnought

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Crabsquid on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Seamoth and other Subnautica creatures in the comments

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: D20 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Knifehead Kaiju on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
