this post was submitted on 24 Jan 2026
61 points (93.0% liked)

Programming


There exists a peculiar amnesia in software engineering regarding XML. Mention it in most circles and you will receive knowing smiles, dismissive waves, the sort of patronizing acknowledgment reserved for technologies deemed passé. "Oh, XML," they say, as if the very syllables carry the weight of obsolescence. "We use JSON now. Much cleaner."

top 50 comments
[–] pinball_wizard@lemmy.zip 7 points 6 days ago* (last edited 6 days ago) (1 children)

When you receive an XML document, you can verify its structure before you ever parse its content. This is not a luxury. This is basic engineering hygiene.

This is actually why my colleagues and I helped kill off XML.

XML APIs require extensive expertise to upgrade asynchronously (and this expertise is vanishingly rare). More typically, all XML endpoints must be upgraded during the same unscheduled downtime.

JSON allows unexpected fields to be added and ignored until each participant can be upgraded, separately and asynchronously. It makes a massive difference in the resilience of the overall system.
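A toy sketch of that tolerance in Python (the payload and field names are invented): an old consumer keeps working after the producer adds a field.

import json

# Producer recently added "nickname"; this older consumer never asks
# for it, so the new field is silently ignored and nothing breaks.
payload = '{"id": 7, "name": "Ada", "nickname": "countess"}'
user = json.loads(payload)
print(user["id"], user["name"])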

I really really liked XML when I first adopted it, because before that I was flinging binary data across the web, which was utterly awful.

But as far as the web goes, XML is exactly where it belongs - buried and forgotten.

Also, it is worth noting that JSON can be validated to satisfy that engineering impulse. The serialize/deserialize step will catch basic flaws, and then the validator simply has to be designed to know which JSON fields it should actually care about. This gives much more resilient results than XML's brittle all-in-one schema specification system - which immediately becomes stale, and isn't actually correct for every endpoint, anyway.

The shared single schema typically described every requirement of every endpoint, not any single endpoint's actual needs. This resulted in needless brittleness, and is one reason we had such a strong push for "microservices". Microservices could each justify their own schema, and so be a bit less brittle.

That said, I would love a good standard declarative configuration JSON validator, as long as it supported custom configs at each endpoint.
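For what it's worth, JSON Schema plus a library like Python's third-party jsonschema already gets close to this; a sketch, with a made-up per-endpoint schema:

import jsonschema  # third-party: pip install jsonschema

# Hypothetical schema for one endpoint: it declares only the fields this
# endpoint actually cares about. additionalProperties defaults to true,
# so fields added later by other participants don't fail validation.
endpoint_schema = {
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
    },
}

jsonschema.validate(
    instance={"id": 7, "name": "Ada", "added_later": "ignored"},
    schema=endpoint_schema,
)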

[–] asret@lemmy.zip 1 points 5 days ago (1 children)

I'm not sure I follow the all-in-one schema issue? Won't each endpoint have its own schema for its response? And if you're updating things asynchronously, then doesn't versioning each endpoint effectively solve all the problems? That way you have all the resilience of the XML validation along with the flexibility of supplying older objects until each participant is updated.

[–] pinball_wizard@lemmy.zip 2 points 5 days ago* (last edited 5 days ago)

Won't each endpoint have its own schema for its response?

They should, but often didn't. Today's IT folks consider microservices the reasonable default. But the logic back when XML was popular tended to be "XML APIs are very expensive to maintain. Let us save time and only maintain one."

And if you're updating things asynchronously then doesn't versioning each endpoint effectively solve all the problems?

XML schema validation meant that if anything changed on any endpoint covered by the schema, all messages would start failing. This was completely preventable, but only by an expert in the XML specification - and there were very few such experts. It was much more common to shut everything down, upgrade everything, and hope it all came back online.

But yes, splitting the endpoint into separate schema files solved many of the issues. It just did so too late to make much difference in the hatred for it.

And really, the remaining issues with the XML stack - dependency hell due to its sprawling, mostly useless feature set, poor documentation, and huge security holes due to that same feature set - were still enough to put the last nail in its coffin.

[–] schnurrito@discuss.tchncs.de 4 points 6 days ago

XML is best suited for storing documents, JSON for transmitting application data over networks.

SVG is an example of an excellent use of XML, but that doesn't mean we should use XML for transmitting data from a backend to a frontend.

[–] entwine@programming.dev 4 points 6 days ago

I agree with everything this article said. A lot of software would work better if devs took the time to learn and appreciate XML. Many times I've found myself reinventing shit XML gives you for free.

...But at the same time, if I'm working on a developer-facing product of any kind, I know that choosing XML over JSON is going to turn a lot of people away.

[–] erebion@news.erebion.eu 3 points 6 days ago

XMPP shows pretty well that XML can do things that cannot be done easily without it. XMPP wouldn't work nearly as well with JSON. Namespaces are a super power.
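A minimal illustration of why (the stanza below is a simplified, made-up XMPP-ish example): two vocabularies coexist in one document without colliding.

import xml.etree.ElementTree as ET

# The core vocabulary and a custom extension live side by side;
# each element is unambiguously owned by its namespace.
stanza = ET.fromstring(
    '<message xmlns="jabber:client" to="alice@example.com">'
    '<body>hi</body>'
    '<custom xmlns="urn:example:mood"><mood>happy</mood></custom>'
    '</message>'
)
print(stanza.find('{jabber:client}body').text)                              # hi
print(stanza.find('{urn:example:mood}custom/{urn:example:mood}mood').text)  # happy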

[–] phoenixz@lemmy.ca 1 points 6 days ago

I'm sure XML has its uses

I'm also sure that for 99% of the applications out there, XML is overkill and over complicated, making things slower and more error prone

Use JSON, and you'll be fine. If you really really need XML then you probably already know why

[–] Colloidal@programming.dev 1 points 6 days ago

ASN.1 crying in the corner.

[–] calliope@retrolemmy.com 34 points 1 week ago* (last edited 1 week ago) (2 children)

There exists a peculiar amnesia in software engineering regarding XML

That’s for sure. But not in the way the author means.

There exists a pattern in software development where people who weren’t around when the debate was actually happening write another theory-based article rehashing old debates like they’re saying something new. Every ten years or so!

The amnesia is coming from inside the article.

[XML] was abandoned because JavaScript won. The browser won.

This comes across as remarkably naive to me. JavaScript and the browser didn’t “win” in this case.

JSON is just vastly simpler to read and reason about for every purpose other than configuration files that are being parsed by someone else. Yaml is even more human-readable and easier to parse for most configuration uses… which is why people writing the configuration parser would rather use it than XML.

Libraries to parse XML were/are extremely complex, by definition. Schemas work great as long as you’re not constantly changing them! Which, unfortunately, happens a lot in projects that are earlier in development.

Switching to JSON for data reduced frustration during development by a massive amount. Since most development isn’t building on defined schemas, the supposed massive benefits of XML were nonexistent in practice.

Even for configuration, the amount of "boilerplate" in XML is atrocious and there are (slightly) better things to use. Twenty years ago everyone used XML for configuring Java, one of the popular backend languages (this author foolishly complains about Java too). I still dread the massive XML configuration files of past Java projects. YAML is confusing in other ways, but XML is awful to work with and parse with any regularity.

I used XML extensively back when everyone writing asynchronous web requests was debating between using the two (in “AJAX”, the X stands for XML).

Once people started using JSON for data, they never went back to XML.

Syntax highlighting only works in your editor, and even then it doesn't help that much if you have a lot of data (like configuration files for large applications). Browsers could even display JSON with syntax highlighting natively, for obvious reasons — JSON is vastly simpler and easier to parse.

[–] tyler@programming.dev 3 points 6 days ago

God, fucking Camel and Hibernate XML were the worst. And I was working with that not even 15 years ago!

[–] Kissaki@programming.dev 7 points 1 week ago* (last edited 1 week ago) (1 children)

Making XML schemas work was often a hassle. You have a schema ID, and sometimes you can open or load the schema through that URL. Other times it serves only as an identifier, and your tooling/IDE must support ID-to-local-XSD-file mappings that you configure.

Every time it didn't immediately work, you'd think: Man, why don't they publish the schema under that public URL.
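For readers who never dealt with it, the hint in question is an attribute like this (URL and file name invented), and nothing guarantees the URI actually resolves to the .xsd:

<order xmlns="http://example.com/orders"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://example.com/orders orders.xsd">
  <item sku="A-1"/>
</order>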

[–] calliope@retrolemmy.com 6 points 1 week ago* (last edited 1 week ago)

This seriously sounds like a nightmare.

It’s giving me Eclipse IDE flashbacks, where it seemed so complicated to configure that I just hoped it didn’t break. There were a lot of moments like that, actually.

[–] epyon22@sh.itjust.works 24 points 1 week ago (1 children)

The fact that JSON serializes easily to basic data structures simplifies code so much. Most use cases don't need fully semantic data storage, and either way you end up writing the same amount of documentation about the data structures. I'll give XML one thing, though: schemas are nice and easy there, while in JSON they have a high barrier to entry.
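That mapping is essentially one call (the values here are invented):

import json

# JSON maps directly onto dicts, lists, strings, numbers, and bools --
# no binding or mapping layer before application code can use it.
doc = json.loads('{"name": "Ada", "tags": ["math", "computing"]}')
print(doc["tags"][0])  # math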

[–] Kissaki@programming.dev 7 points 1 week ago (2 children)

Most use cases don’t need fully semantic data storage

If both sides have a shared data model it's a good base model without further needs. Anything else quickly becomes complicated because of the dynamic nature of JSON - at least if you want a robust or well-documented solution.

[–] SlurpingPus@lemmy.world 4 points 6 days ago (1 children)

If both sides have a shared data model

If the sides don't have a common understanding of the data structure, no format under the sun will help.

[–] Kissaki@programming.dev 1 points 6 days ago

The point is that there are degrees to readability, specificity, and obviousness, even without a common understanding. Self-describing data, much like self-describing code, is different from a dense serialization without much support in that regard.

[–] Feyd@programming.dev 20 points 1 week ago* (last edited 1 week ago)

Honestly, anyone pining for all the features of XML probably didn't live through the time when XML was used for everything. It was actually a fucking nightmare to account for the existence of all those features because the fact they existed meant someone could use them and feed them into your system. They were also the source of a lot of security flaws.

This article looks like it was written by someone who wasn't there, and who calls the people telling them the truth liars because the features they found on W3Schools look cool.

[–] Diplomjodler3@lemmy.world 18 points 1 week ago (1 children)

It's true, though, that JSON is just better for most applications.

[–] MonkderVierte@lemmy.zip 10 points 1 week ago (19 children)

Except config files. Please don't do config files in json.

[–] AnitaAmandaHuginskis@lemmy.world 14 points 1 week ago* (last edited 1 week ago) (1 children)

I love XML, when it is properly utilized. Which, in most cases, it is not, unfortunately.

JSON > CSV though, I fucking hate CSV. I do not get the appeal. "It's easy to handle" -- NO, it is not. It's the "fuck whoever needs to handle this" of file "formats".

JSON is a reasonable middle ground, I'll give you that

[–] unique_hemp@discuss.tchncs.de 8 points 6 days ago (2 children)

CSV >>> JSON when dealing with large tabular data:

  1. Can be parsed row by row
  2. Does not repeat column names; repeating them on every row makes JSON larger and slower to parse

1 can be solved with JSONL, but 2 is unavoidable.
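A JSONL sketch for point 1 (file name invented): rows stream one at a time, just like CSV rows.

import json

# One JSON object per line, so memory use stays flat however big the file is.
with open("people.jsonl") as f:
    for line in f:
        row = json.loads(line)
        print(row["id"], row["name"])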

[–] entwine@programming.dev 2 points 6 days ago* (last edited 6 days ago) (1 children)
{
    "columns": ["id", "name", "age"],
    "rows": [
        [1, "bob", 44], [2, "alice", 7], ...
    ]
}

There ya go, problem solved without the unparseable ambiguity of CSV

Please stop using CSV.

[–] unique_hemp@discuss.tchncs.de 2 points 6 days ago (1 children)

Great, now read it row by row without keeping it all in memory.

[–] entwine@programming.dev 1 points 5 days ago

Wdym? That's a parser implementation detail. Even if the parser you're using needs to load the whole file into memory, it's trivial to write your own parser that reads those entries one row at a time. You could even add random access if you get creative.

That's one of the benefits of JSON: it is dead simple to parse.
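For what it's worth, streaming parsers for exactly this already exist; a sketch using the third-party ijson library against the columnar layout above (file name invented):

import ijson  # third-party streaming JSON parser: pip install ijson

# Iterate the "rows" array one element at a time without loading the
# whole document; "rows.item" is ijson's path prefix for array elements.
with open("table.json", "rb") as f:
    for row in ijson.items(f, "rows.item"):
        print(row)  # e.g. [1, 'bob', 44]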

[–] abruptly8951@lemmy.world 1 points 6 days ago (1 children)

Yes... but compression.

And with CSV you just gotta pray that your parser parses the same as their writer... and that their writer was correctly implemented... and that they set the settings correctly.
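A toy example of that guesswork (file name and settings invented): every keyword argument below is a guess about choices the writer never recorded anywhere.

import csv

# Delimiter, quoting, escaping, encoding: all unknowable from the bytes alone.
with open("export.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter=";", quotechar='"'):
        print(row)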

[–] unique_hemp@discuss.tchncs.de 1 points 6 days ago (1 children)

Compression adds another layer of complexity for parsing.

JSON can also have configuration mismatch problems. Main one that comes to mind is case (in)sensitivity for keys.

[–] abruptly8951@lemmy.world 3 points 6 days ago

Nahh, you're nitpicking there; large CSVs are gonna be compressed anyways.

In practice I've never met a JSON I can't parse; every second CSV is unparseable.

[–] lehenry@lemmy.world 8 points 1 week ago (4 children)

While I understand the criticism of XPath and XSL, the fact that we have proper tools to query and transform XML, instead of the messy way of getting specific information out of JSON, is also one of the strong points of XML.

[–] deadbeef79000@lemmy.nz 7 points 1 week ago

XSLT and XPath are entirely underrated. They are seriously powerful tools.

While you can approximate XSLT with a heap of code and a JSON parser, it's harder to keep it declarative.
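A small XPath example with the third-party lxml library (document invented) shows what staying declarative buys: one expression instead of a hand-rolled tree walk.

from lxml import etree  # third-party: pip install lxml

# Select every English book title in one declarative expression.
doc = etree.fromstring(
    "<library>"
    "<book lang='en'><title>Dune</title></book>"
    "<book lang='fr'><title>Vendredi</title></book>"
    "</library>"
)
print(doc.xpath("//book[@lang='en']/title/text()"))  # ['Dune']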
