Kissaki

joined 2 years ago
[–] Kissaki@programming.dev 11 points 2 months ago (2 children)

cdrewind Rewind CDROMs before ejection.

lol wut

[–] Kissaki@programming.dev 5 points 2 months ago

One of the two associations is in power and actively dismantling society. The other develops a technical product and runs a Lemmy instance many people and other instances have blocked.

Handling and judging them a bit differently seems quite fine to me.

That being said, I've seen plenty of criticism of the Lemmy devs' connections on this platform. I can't say the same about FUTO.

[–] Kissaki@programming.dev 3 points 2 months ago

No Gotos, All Subs

That's sub-optimal

😏

[–] Kissaki@programming.dev 2 points 2 months ago* (last edited 2 months ago) (1 children)

I don't think Microsoft will hold your hand. That's for local IT or user support to do.

In my eyes, the main issue is the decision makers falling for familiarity and for marketing/sales pushes.

Which makes it even more absurd/ironic that after investing in the switch, they invest again in switching to something that is not really better.

Either way, this time there's a lot more relevance and pressure to make a change, and a lasting one. The environment is not the same as before.

[–] Kissaki@programming.dev 1 points 2 months ago

I vaguely remember reading about two/twice, but I can't provide sources either.

[–] Kissaki@programming.dev 6 points 2 months ago

What is the vulnerability, what is the attack vector, and how does it work? The technical context, from the linked source Edera:

This vulnerability is a desynchronization flaw that allows an attacker to "smuggle" additional archive entries into TAR extractions. It occurs when processing nested TAR files that exhibit a specific mismatch between their PAX extended headers and ustar headers.

The flaw stems from the parser's inconsistent logic when determining file data boundaries:

  1. A file entry has both PAX and ustar headers.
  2. The PAX header correctly specifies the actual file size (size=X, e.g., 1MB).
  3. The ustar header incorrectly specifies zero size (size=0).
  4. The vulnerable tokio-tar parser incorrectly advances the stream position based on the ustar size (0 bytes) instead of the PAX size (X bytes).

By advancing 0 bytes, the parser fails to skip over the actual file data (which is a nested TAR archive) and immediately encounters the next valid TAR header located at the start of the nested archive. It then incorrectly interprets the inner archive's headers as legitimate entries belonging to the outer archive.
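
To make the boundary decision concrete, here is a minimal C# sketch of the size resolution a PAX-aware reader has to perform. This is an illustrative, hypothetical parser fragment, not tokio-tar's actual code:

    using System.Collections.Generic;

    // Per POSIX.1-2001, a PAX "size" record overrides the size field of the
    // following ustar header. Preferring the ustar size is exactly the
    // desync described above.
    static long EntryDataSize(IReadOnlyDictionary<string, string> paxRecords, long ustarSize)
    {
        if (paxRecords.TryGetValue("size", out var value) &&
            long.TryParse(value, out var paxSize))
        {
            return paxSize; // correct: the real data length, e.g. 1 MB
        }
        return ustarSize;   // the vulnerable path effectively used this (0)
    }

    // A crafted entry: PAX says 1 MB, ustar says 0.
    var pax = new Dictionary<string, string> { ["size"] = "1048576" };
    Console.WriteLine(EntryDataSize(pax, ustarSize: 0)); // 1048576; advancing by 0 instead
                                                         // leaves the nested archive's
                                                         // headers next in the stream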

This leads to:

  • File overwriting attacks within extraction directories.
  • Supply chain attacks via build system and package manager exploitation.
  • Bill-of-materials (BOM) bypass for security scanning.
[–] Kissaki@programming.dev 3 points 2 months ago

The attack surface is the flaw. The chain of trust is the risk.

Who's behind the project? Who has control? How's the release handled? What are the risks and vulnerabilities of the entire product delivery?

It's much more obvious and established/vetted with Mozilla. With any other fork product, you first have to evaluate it yourself.

[–] Kissaki@programming.dev 4 points 3 months ago

GlobalSign certificat

😺

[–] Kissaki@programming.dev 9 points 3 months ago* (last edited 3 months ago)

The post How Functional Programming Shaped (and Twisted) Frontend Development from four days ago provides great broader context about the history and concerns. It ends with what this post seems to primarily concern itself with: The alternatives, improvements, innovation, and opportunities we may be missing / should evaluate.

[–] Kissaki@programming.dev 1 points 3 months ago

You could call yourself enlightened 😏

[–] Kissaki@programming.dev 7 points 3 months ago* (last edited 3 months ago)

I strongly disagree.

Coloring is categorization of code. Much like indentation, spacing, line-breaking, and alignment, it aids readability.

None of the examples they provided looked better, more appropriate, or more useful. None of the "tests" led me to question my syntax highlighting. Quite the contrary.

By reducing the highlighting to what they deem important, they lose the highlighting for other cases. The examples that highlight only one or two things make this obvious: when you highlight only method heads, you gain clarity when reading at that level, across methods, but lose everything when reading the body.

I didn't particularly like their dark theme choice. Their initial example is certainly noisy, but you can have better themes and defaults with subtler, more equal-strength colors. The language or framework syntax and spacing can also influence it.

When color categorizes code, bolding is very useful for giving additional structural discoverability, just like spacing does.

[–] Kissaki@programming.dev 7 points 3 months ago

I failed the question about remembering what colour my class definitions were, but you know what? I don’t care. All I want is for it to be visually distinct when I’m trying to parse a block of code

Between multiple IDEs, text editors, diff viewers and editors, and hosted tools like MR/review diffs, they're not even consistently one thing. For me, very practically and factually, colors differ.

As you point out, they're entirely missing the point: what the colors are for and how they're being used.

 

This first push resulted in NuGet Restore times being cut in half, which was a reasonable stopping point for our work. However, along the way, we realized that a more extensive rewrite could improve performance by a factor of 5x or more.

Written from the perspective of several team members, this entry provides a deep dive into the internals of NuGet, as well as strategies to identify and address performance issues.

 


In the rapidly evolving world of AI and machine learning, effective communication between models and applications is critical. The Model Context Protocol (MCP) is a standardized protocol designed to facilitate this communication by providing a structured way to exchange context and data between AI models and their clients.

The MCP C# SDK is in preview and APIs may change. We will continuously update this blog as the SDK evolves.
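
For a sense of the shape, a minimal server with the preview SDK looks roughly like this. The sketch follows the SDK's announcement samples, and, as the quote above says, the preview APIs may change:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    using ModelContextProtocol.Server;
    using System.ComponentModel;

    // A stdio-based MCP server exposing the tools found in this assembly.
    var builder = Host.CreateApplicationBuilder(args);
    builder.Services
        .AddMcpServer()
        .WithStdioServerTransport()
        .WithToolsFromAssembly();
    await builder.Build().RunAsync();

    [McpServerToolType]
    public static class EchoTool
    {
        [McpServerTool, Description("Echoes the message back to the client.")]
        public static string Echo(string message) => $"Echo: {message}";
    }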

 

The Push Notification Hub (PNH) service recently went through significant modernization. We migrated from legacy components like .NET Framework 4.7.2 and a custom HTTP server called “RestServer” to .NET 8 and ASP.NET Core 8. Moreover, for handling outgoing requests, we moved from a custom HTTP client/handler called “HttpPooler” to Polly v8 and SocketsHttpHandler. This article describes the journey thus far and its impact on PNH performance.

Sections: Intro (what is PNH), expectations, measurement, migration phases (concrete tech and measurements), closing thoughts, next steps.

PNH is deriving great benefits from .NET 8. Overall performance improved by about 70%, as evidenced by the Q-Factor metric. Performance is a major factor for a service like this and will reflect positively in basically all flows on the Teams platform that have to do with messaging. The results actually exceeded our expectations by a significant margin.
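
For illustration, that combination translates to roughly the following setup. This is a minimal sketch assuming Polly v8's resilience APIs, not PNH's actual code, and the endpoint URL is hypothetical:

    using System.Net.Http;
    using Polly;
    using Polly.Retry;

    // Pooled outgoing connections via SocketsHttpHandler.
    var handler = new SocketsHttpHandler
    {
        PooledConnectionLifetime = TimeSpan.FromMinutes(2), // recycle to pick up DNS changes
        MaxConnectionsPerServer = 100,
    };
    var client = new HttpClient(handler);

    // Polly v8 pipeline: exponential-backoff retries plus an overall timeout.
    ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
        .AddRetry(new RetryStrategyOptions
        {
            MaxRetryAttempts = 3,
            BackoffType = DelayBackoffType.Exponential,
        })
        .AddTimeout(TimeSpan.FromSeconds(5))
        .Build();

    var response = await pipeline.ExecuteAsync(
        async ct => await client.GetAsync("https://example.org/notify", ct));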

 


Dev containers are pre-configured, isolated environments that allow developers to work on projects without worrying about dependencies and configurations. They are particularly useful for trying out new technologies, as they provide a consistent and reproducible setup.

The containers are Docker containers.
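
A minimal .devcontainer/devcontainer.json makes the idea concrete. The image is one of Microsoft's published dev container images; treat the rest as an illustrative sketch to adapt to whatever you're trying out:

    {
      // Picked up by VS Code and other dev-container-aware tools.
      "name": "dotnet-playground",
      "image": "mcr.microsoft.com/devcontainers/dotnet:8.0",
      "features": {
        "ghcr.io/devcontainers/features/node:1": {}
      },
      "postCreateCommand": "dotnet restore"
    }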

10
Cysharp libraries (cysharp.co.jp)
submitted 10 months ago* (last edited 10 months ago) by Kissaki@programming.dev to c/dotnet@programming.dev
 

Working together with Cygames to push the limits of performance of both server-side (.NET) and client-side (Unity) C# through open source.

GitHub https://github.com/Cysharp

  • MemoryPack: Extreme performance binary serializer for C# and Unity.
  • MagicOnion: Unified Realtime/API framework for .NET platform and Unity.
  • ConsoleAppFramework: Micro-framework for building CLI tools and console applications for .NET.
  • MasterMemory: Embedded Typed Readonly In-Memory Document Database for .NET and Unity.
  • ZString: Zero Allocation StringBuilder for .NET and Unity.
  • UniTask: Provides an efficient async/await integration for Unity.

The libraries look very interesting.
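
As a taste, ZString's allocation-free string building looks roughly like this (a sketch assuming the ZString package; Cysharp.Text is its namespace):

    using Cysharp.Text;

    // Pooled value StringBuilder; Dispose returns the buffer to the pool.
    using (var sb = ZString.CreateStringBuilder())
    {
        sb.Append("score: ");
        sb.Append(42); // appended without boxing or intermediate strings
        Console.WriteLine(sb.ToString());
    }

    // One-shot formatting without intermediate allocations:
    string msg = ZString.Format("hp {0}/{1}", 75, 100);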

 

These services run on Azure compute and are primarily .NET based.

[.NET Aspire] lets us find all of those minor issues locally, and removes much of the need for full deployment to do our basic hookup validation.

.NET Aspire also automates emulator usage for Azure dependencies out of the box
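
That emulator automation lives in the AppHost project. A minimal sketch, where the Projects.MyService reference is hypothetical and the storage resource assumes the Aspire Azure hosting package:

    // AppHost Program.cs
    var builder = DistributedApplication.CreateBuilder(args);

    // Locally, RunAsEmulator() has Aspire start the Azurite container;
    // deployed to Azure, the same resource maps to a real storage account.
    var storage = builder.AddAzureStorage("storage").RunAsEmulator();
    var blobs = storage.AddBlobs("blobs");

    builder.AddProject<Projects.MyService>("myservice")
        .WithReference(blobs); // connection details are injected automatically

    builder.Build().Run();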

4
submitted 10 months ago* (last edited 10 months ago) by Kissaki@programming.dev to c/visualstudio@programming.dev