Verified AgentReady.md Certificate
Issued. sig: 9d9f37d066e95579

Analyzed URL

https://www.jeronimo.dev/


AI-Ready Score

76 / 100, Grade B (Good)

Token Savings

HTML tokens: 3880
Markdown tokens: 892
Savings: 77%

Score Breakdown

Semantic HTML 91/100
Content Efficiency 86/100
AI Discoverability 50/100
Structured Data 67/100
Accessibility 93/100

Your website has no llms.txt file. This is the emerging standard for helping AI agents understand your website structure.

How to implement it

Create a /llms.txt file following the llmstxt.org specification. Include a description of your website and links to your most important pages; a minimal sketch of the format follows.
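As a hedged sketch of the file layout (the site name, section name, and descriptions below are placeholders; the actual llms.txt generated for this page appears further down in this report):

```markdown
# Site name

> One-sentence description of the site.

## Main
- [Page title](https://example.com/page/): optional one-line description of the page
```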

Your website does not support Markdown for Agents. This Cloudflare standard lets AI agents request content in Markdown format, cutting token consumption by roughly 80%.

How to implement it

Implement one or more of the following: (1) respond to Accept: text/markdown with Markdown content; (2) serve .md URLs (e.g., /page.md); (3) add <link rel="alternate" type="text/markdown"> tags; (4) add Link HTTP headers for Markdown discovery. A sketch of options (3) and (4) follows.
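A minimal sketch, assuming a pre-generated Markdown twin of a page exists at an .md path (the path below is illustrative, not a file that exists on this site):

```html
<!-- Option 3: advertise the Markdown version in the page <head> -->
<link rel="alternate" type="text/markdown"
      href="https://www.jeronimo.dev/working-with-parquet-files-in-java/index.md">
```

Option (4) sends the same discovery hint as an HTTP response header instead:

```
Link: <https://www.jeronimo.dev/working-with-parquet-files-in-java/index.md>; rel="alternate"; type="text/markdown"
```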

No Content-Signal directives found. These tell AI agents how they may use your content (search indexing, AI input, training data). The recommended location is robots.txt.

How to implement it

Add Content-Signal to your robots.txt, as shown in the sketch below. You can also send it as an HTTP header on Markdown responses.
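The recommendation's directive, written out as plain robots.txt lines (the yes/no values are the report's example policy, allowing search indexing and AI input while disallowing training, not a requirement):

```
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
```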

Your heading structure has issues (skipped levels or multiple h1 tags). A clean hierarchy helps AI agents understand how your content is organized.

How to implement it

Make sure each page has exactly one <h1> and that headings follow a sequential order: h1 > h2 > h3. Do not skip levels (e.g., jumping from h1 straight to h3).
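A minimal sketch using this page's own titles. The audit found two <h1> elements and one skipped level (post titles are rendered as <h3> directly under the <h1>, as the generated Markdown below shows); the intermediate <h2> here is an illustrative grouping heading, not one that exists on the page:

```html
<h1>Spartan Blog - Jerónimo</h1>                 <!-- exactly one h1 per page -->
<h2>Latest posts</h2>                            <!-- illustrative grouping level -->
<h3>Integrating Spring Batch with Parquet</h3>  <!-- one level below, no skip -->
```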

Missing or incomplete Open Graph tags. OG tags help AI agents (and social platforms) understand your page's title, description, and image.

How to implement it

Add og:title, og:description, and og:image meta tags to your page's <head>.
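Per the extraction data later in this report, og:title and og:description are already present and og:image is the one missing tag. A sketch of the missing tag; the helmet logo is only a plausible candidate taken from the page, not a verified choice:

```html
<meta property="og:image" content="https://www.jeronimo.dev/images/spartan-helmet.png">
```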

Markdown tokens: 892
Spring Batch is one of the few existing tools in the Java Enterprise ecosystem for building batch processes or data pipelines. However, its components (ItemReader/ItemWriter) are primarily oriented toward relational databases, CSV, XML, or JSON.

In a world where Data Lakes and columnar formats are increasingly important, integrating Parquet with Spring Batch opens new possibilities for building data pipelines from the Java world, without depending on complex solutions or different technology stacks that often cause friction in the Enterprise world.

This week I released a new version of [Carpet](https://github.com/jerolba/parquet-carpet), the Java library for working with Parquet files. In this version, I’ve added a feature that I believe nobody will ever use: **the ability to read and write BSON-type columns**.

A few days ago, the creators of DuckDB wrote the article: [Query Engines: Gatekeepers of the Parquet File Format](https://duckdb.org/2025/01/22/parquet-encodings.html), which explained how the engines that process Parquet files as SQL tables are blocking the evolution of the format. This is because those engines are not fully supporting the latest specification, and without this support, the rest of the ecosystem has no incentive to adopt it.

Apache Parquet is a columnar storage format optimized for analytical workloads, though it can also be used to store any type of structured data solving multiple use cases.

One of its most notable features is the ability to efficiently compress data using different compression techniques at two stages of its process. This reduces storage costs and improves reading performance.

This article explains file compression in Parquet for Java, provides usage examples, and analyzes its performance.

After some time working with Parquet files in Java using the Parquet Avro library, and studying how it worked, I concluded that despite **being very useful** in multiple use cases and having great potential, **the documentation and ecosystem needed for adoption in the Java world was very poor**.

Many people are using suboptimal solutions (CSV or JSON files), applying more complex solutions (Spark), or using languages they are not familiar with (Python) because they don’t know how to work with Parquet files easily. That’s why I decided to **write this [series of articles](https://www.jeronimo.dev/working-with-parquet-files-in-java/)**.

Once you understand it and have the examples, everything is easier. But, **can it be even easier?** Can we avoid the hassle of using *strange* libraries that serialize other formats? **Yes, it should be even easier.**

That’s why I decided to **implement an Open Source library** that makes working with Parquet from Java extremely simple, something that covers it: **Carpet**.

This post continues the series of articles about working with Parquet files in Java. This time, I’ll explain how to do it using the Protocol Buffers (PB) library.

Finding examples and documentation on how to use Parquet with Avro is challenging, but with **Protocol Buffers, it’s even more complicated**.

In the previous article, I wrote an introduction to using Parquet files in Java, but I did not include any examples. In this article, I will explain how to do this using the Avro library.

Parquet with Avro **is one of the most popular ways to work with Parquet files in Java** due to its simplicity, flexibility, and because it is the library with the most examples.

Parquet is a widely used format in the Data Engineering realm and holds significant potential for traditional Backend applications. This article serves as an **introduction to the format**, including some of the unique challenges I’ve faced while using it, to spare you from similar experiences.

In previous posts I’ve analyzed [Protocol Buffers](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) and [FlatBuffers](https://www.jeronimo.dev/java-serialization-with-flatbuffers/), using JSON as the baseline. In this post, I will analyze Apache Avro and compare it with the previously studied formats.

In the [previous post](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) I analyzed Protocol Buffers format, using JSON as baseline. In this post I’m going to analyze FlatBuffers and compare it with previously studied formats.
Spartan Blog - Jerónimo | Jerolba’s blog. Tech, JVM and random stuff.

[![Spartan Blog - Jerónimo](https://www.jeronimo.dev/images/spartan-helmet.png)](https://www.jeronimo.dev/ "Spartan Blog - Jerónimo")# [Spartan Blog - Jerónimo](https://www.jeronimo.dev/)

Jerolba's blog. Tech, JVM and random stuff.

### [Integrating Spring Batch with Parquet](https://www.jeronimo.dev/integrating-spring-batch-with-parquet/)

Spring Batch is one of the few existing tools in the Java Enterprise ecosystem for building batch processes or data pipelines. However, its components (ItemReader/ItemWriter) are primarily oriented toward relational databases, CSV, XML, or JSON.

In a world where Data Lakes and columnar formats are increasingly important, integrating Parquet with Spring Batch opens new possibilities for building data pipelines from the Java world, without depending on complex solutions or different technology stacks that often cause friction in the Enterprise world.

### [The Carpet feature that nobody will use](https://www.jeronimo.dev/the-carpet-feature-that-nobody-will-use/)

This week I released a new version of [Carpet](https://github.com/jerolba/parquet-carpet), the Java library for working with Parquet files. In this version, I’ve added a feature that I believe nobody will ever use: **the ability to read and write BSON-type columns**.

### [The two versions of Parquet](https://www.jeronimo.dev/the-two-versions-of-parquet/)

A few days ago, the creators of DuckDB wrote the article: [Query Engines: Gatekeepers of the Parquet File Format](https://duckdb.org/2025/01/22/parquet-encodings.html), which explained how the engines that process Parquet files as SQL tables are blocking the evolution of the format. This is because those engines are not fully supporting the latest specification, and without this support, the rest of the ecosystem has no incentive to adopt it.

### [Compression algorithms in Parquet](https://www.jeronimo.dev/compression-algorithms-parquet/)

Apache Parquet is a columnar storage format optimized for analytical workloads, though it can also be used to store any type of structured data solving multiple use cases.

One of its most notable features is the ability to efficiently compress data using different compression techniques at two stages of its process. This reduces storage costs and improves reading performance.

This article explains file compression in Parquet for Java, provides usage examples, and analyzes its performance.

### [Working with Parquet files in Java using Parquet Carpet](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-carpet/)

After some time working with Parquet files in Java using the Parquet Avro library, and studying how it worked, I concluded that despite **being very useful** in multiple use cases and having great potential, **the documentation and ecosystem needed for adoption in the Java world was very poor**.

Many people are using suboptimal solutions (CSV or JSON files), applying more complex solutions (Spark), or using languages they are not familiar with (Python) because they don’t know how to work with Parquet files easily. That’s why I decided to **write this [series of articles](https://www.jeronimo.dev/working-with-parquet-files-in-java/)**.

Once you understand it and have the examples, everything is easier. But, **can it be even easier?** Can we avoid the hassle of using *strange* libraries that serialize other formats? **Yes, it should be even easier.**

That’s why I decided to **implement an Open Source library** that makes working with Parquet from Java extremely simple, something that covers it: **Carpet**.

### [Working with Parquet files in Java using Protocol Buffers](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-protocol-buffers/)

This post continues the series of articles about working with Parquet files in Java. This time, I’ll explain how to do it using the Protocol Buffers (PB) library.

Finding examples and documentation on how to use Parquet with Avro is challenging, but with **Protocol Buffers, it’s even more complicated**.

### [Working with Parquet files in Java using Avro](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-avro/)

In the previous article, I wrote an introduction to using Parquet files in Java, but I did not include any examples. In this article, I will explain how to do this using the Avro library.

Parquet with Avro **is one of the most popular ways to work with Parquet files in Java** due to its simplicity, flexibility, and because it is the library with the most examples.

### [Working with Parquet files in Java](https://www.jeronimo.dev/working-with-parquet-files-in-java/)

Parquet is a widely used format in the Data Engineering realm and holds significant potential for traditional Backend applications. This article serves as an **introduction to the format**, including some of the unique challenges I’ve faced while using it, to spare you from similar experiences.

### [Java Serialization with Apache Avro](https://www.jeronimo.dev/java-serialization-with-avro/)

In previous posts I’ve analyzed [Protocol Buffers](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) and [FlatBuffers](https://www.jeronimo.dev/java-serialization-with-flatbuffers/), using JSON as the baseline. In this post, I will analyze Apache Avro and compare it with the previously studied formats.

### [Java Serialization with Flatbuffers](https://www.jeronimo.dev/java-serialization-with-flatbuffers/)

In the [previous post](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) I analyzed Protocol Buffers format, using JSON as baseline. In this post I’m going to analyze FlatBuffers and compare it with previously studied formats.

Upload this file to your server as /index.md so AI agents can access a clean version of your page. You can also configure Accept: text/markdown content negotiation to serve it automatically.

Generated llms.txt for this single page

# Spartan Blog - Jerónimo

> Jerolba’s blog. Tech, JVM and random stuff.

## Main
- [Spartan Blog - Jerónimo](https://www.jeronimo.dev/): Jerolba’s blog. Tech, JVM and random stuff.
- [Integrating Spring Batch with Parquet](https://www.jeronimo.dev/integrating-spring-batch-with-parquet/)
- [The Carpet feature that nobody will use](https://www.jeronimo.dev/the-carpet-feature-that-nobody-will-use/)
- [The two versions of Parquet](https://www.jeronimo.dev/the-two-versions-of-parquet/)
- [Compression algorithms in Parquet](https://www.jeronimo.dev/compression-algorithms-parquet/)
- [Working with Parquet files in Java using Parquet Carpet](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-carpet/)
- [Working with Parquet files in Java using Protocol Buffers](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-protocol-buffers/)
- [Working with Parquet files in Java using Avro](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-avro/)
- [Working with Parquet files in Java](https://www.jeronimo.dev/working-with-parquet-files-in-java/)

A full llms.txt requires a domain-wide analysis (coming soon)

Upload this file as https://www.jeronimo.dev/llms.txt in the root of your domain. AI agents such as ChatGPT, Claude, and Perplexity check this file to understand your website structure.

Semantic HTML

Uses article or main element (100/100)

Has both <article> and <main>

Proper heading hierarchy (65/100)

2 <h1> elements (should be 1), 1 heading level skip(s)

Uses semantic HTML elements (100/100)

46 semantic elements, 15 divs (ratio: 75%)

Meaningful image alt text (100/100)

1/1 images with meaningful alt text

Low div nesting depth (100/100)

Avg div depth: 2.2, max: 3

Content Efficiency

Good token reduction ratio (80/100)

77% token reduction (HTML→Markdown)

Good content-to-noise ratio (80/100)

Content ratio: 29.2% (4359 content chars / 14947 HTML bytes)

Minimal inline styles (100/100)

0/175 elements with inline styles (0.0%)

Reasonable page weight (100/100)

HTML size: 15KB

AI Discoverability

Has llms.txt file (0/100)

No llms.txt found

Has robots.txt file (100/100)

robots.txt exists

robots.txt allows AI bots (100/100)

All major AI bots allowed

Has sitemap.xml (100/100)

Sitemap found

Markdown for Agents support (0/100)

No markdown content negotiation

Has Content-Signal (robots.txt or HTTP header) (0/100)

No Content-Signal header

Structured Data

Has Schema.org / JSON-LD (50/100)

JSON-LD found but basic types: WebSite

Has Open Graph tags (67/100)

2/3 OG tags present

Has meta description (50/100)

Meta description too short: 43 chars

Has canonical URL (100/100)

Canonical URL present

Has lang attribute (100/100)

lang="en-US"

Accessibility

Content available without JavaScript (100/100)

Content available without JavaScript

Reasonable page size (100/100)

Page size: 15KB

Content appears early in HTML (75/100)

Main content starts at 22% of HTML

{
  "url": "https://www.jeronimo.dev/",
  "timestamp": 1771156368416,
  "fetch": {
    "mode": "simple",
    "timeMs": 136,
    "htmlSizeBytes": 14947,
    "supportsMarkdown": false,
    "statusCode": 200
  },
  "extraction": {
    "title": "Spartan Blog - Jerónimo",
    "excerpt": "Jerolba’s blog. Tech, JVM and random stuff.",
    "byline": "Jerónimo López",
    "siteName": "Spartan Blog - Jerónimo",
    "lang": "en-US",
    "contentLength": 4359,
    "metadata": {
      "description": "Jerolba’s blog. Tech, JVM and random stuff.",
      "ogTitle": "Spartan Blog - Jerónimo",
      "ogDescription": "Jerolba’s blog. Tech, JVM and random stuff.",
      "ogImage": null,
      "ogType": "website",
      "canonical": "https://www.jeronimo.dev/",
      "lang": "en-US",
      "schemas": [
        {
          "@context": "https://schema.org",
          "@type": "WebSite",
          "author": {
            "@type": "Person",
            "name": "Jerónimo López"
          },
          "description": "Jerolba’s blog. Tech, JVM and random stuff.",
          "headline": "Spartan Blog - Jerónimo",
          "name": "Spartan Blog - Jerónimo",
          "publisher": {
            "@type": "Organization",
            "logo": {
              "@type": "ImageObject",
              "url": "https://www.jeronimo.dev/images/spartan-helmet.png"
            },
            "name": "Jerónimo López"
          },
          "url": "https://www.jeronimo.dev/"
        }
      ],
      "robotsMeta": null,
      "author": "Jerónimo López",
      "generator": "Jekyll v3.8.7"
    }
  },
  "markdown": "Spring Batch is one of the few existing tools in the Java Enterprise ecosystem for building batch processes or data pipelines. However, its components (ItemReader/ItemWriter) are primarily oriented toward relational databases, CSV, XML, or JSON.\n\nIn a world where Data Lakes and columnar formats are increasingly important, integrating Parquet with Spring Batch opens new possibilities for building data pipelines from the Java world, without depending on complex solutions or different technology stacks that often cause friction in the Enterprise world.\n\nThis week I released a new version of [Carpet](https://github.com/jerolba/parquet-carpet), the Java library for working with Parquet files. In this version, I’ve added a feature that I believe nobody will ever use: **the ability to read and write BSON-type columns**.\n\nA few days ago, the creators of DuckDB wrote the article: [Query Engines: Gatekeepers of the Parquet File Format](https://duckdb.org/2025/01/22/parquet-encodings.html), which explained how the engines that process Parquet files as SQL tables are blocking the evolution of the format. This is because those engines are not fully supporting the latest specification, and without this support, the rest of the ecosystem has no incentive to adopt it.\n\nApache Parquet is a columnar storage format optimized for analytical workloads, though it can also be used to store any type of structured data solving multiple use cases.\n\nOne of its most notable features is the ability to efficiently compress data using different compression techniques at two stages of its process. This reduces storage costs and improves reading performance.\n\nThis article explains file compression in Parquet for Java, provides usage examples, and analyzes its performance.\n\nAfter some time working with Parquet files in Java using the Parquet Avro library, and studying how it worked, I concluded that despite **being very useful** in multiple use cases and having great potential, **the documentation and ecosystem needed for adoption in the Java world was very poor**.\n\nMany people are using suboptimal solutions (CSV or JSON files), applying more complex solutions (Spark), or using languages they are not familiar with (Python) because they don’t know how to work with Parquet files easily. That’s why I decided to **write this [series of articles](https://www.jeronimo.dev/working-with-parquet-files-in-java/)**.\n\nOnce you understand it and have the examples, everything is easier. But, **can it be even easier?** Can we avoid the hassle of using *strange* libraries that serialize other formats? **Yes, it should be even easier.**\n\nThat’s why I decided to **implement an Open Source library** that makes working with Parquet from Java extremely simple, something that covers it: **Carpet**.\n\nThis post continues the series of articles about working with Parquet files in Java. This time, I’ll explain how to do it using the Protocol Buffers (PB) library.\n\nFinding examples and documentation on how to use Parquet with Avro is challenging, but with **Protocol Buffers, it’s even more complicated**.\n\nIn the previous article, I wrote an introduction to using Parquet files in Java, but I did not include any examples. 
In this article, I will explain how to do this using the Avro library.\n\nParquet with Avro **is one of the most popular ways to work with Parquet files in Java** due to its simplicity, flexibility, and because it is the library with the most examples.\n\nParquet is a widely used format in the Data Engineering realm and holds significant potential for traditional Backend applications. This article serves as an **introduction to the format**, including some of the unique challenges I’ve faced while using it, to spare you from similar experiences.\n\nIn previous posts I’ve analyzed [Protocol Buffers](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) and [FlatBuffers](https://www.jeronimo.dev/java-serialization-with-flatbuffers/), using JSON as the baseline. In this post, I will analyze Apache Avro and compare it with the previously studied formats.\n\nIn the [previous post](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) I analyzed Protocol Buffers format, using JSON as baseline. In this post I’m going to analyze FlatBuffers and compare it with previously studied formats.\n",
  "fullPageMarkdown": "Spartan Blog - Jerónimo | Jerolba’s blog. Tech, JVM and random stuff.\n\n[![Spartan Blog - Jerónimo](https://www.jeronimo.dev/images/spartan-helmet.png)](https://www.jeronimo.dev/ \"Spartan Blog - Jerónimo\")# [Spartan Blog - Jerónimo](https://www.jeronimo.dev/)\n\nJerolba's blog. Tech, JVM and random stuff.\n\n### [Integrating Spring Batch with Parquet](https://www.jeronimo.dev/integrating-spring-batch-with-parquet/)\n\nSpring Batch is one of the few existing tools in the Java Enterprise ecosystem for building batch processes or data pipelines. However, its components (ItemReader/ItemWriter) are primarily oriented toward relational databases, CSV, XML, or JSON.\n\nIn a world where Data Lakes and columnar formats are increasingly important, integrating Parquet with Spring Batch opens new possibilities for building data pipelines from the Java world, without depending on complex solutions or different technology stacks that often cause friction in the Enterprise world.\n\n### [The Carpet feature that nobody will use](https://www.jeronimo.dev/the-carpet-feature-that-nobody-will-use/)\n\nThis week I released a new version of [Carpet](https://github.com/jerolba/parquet-carpet), the Java library for working with Parquet files. In this version, I’ve added a feature that I believe nobody will ever use: **the ability to read and write BSON-type columns**.\n\n### [The two versions of Parquet](https://www.jeronimo.dev/the-two-versions-of-parquet/)\n\nA few days ago, the creators of DuckDB wrote the article: [Query Engines: Gatekeepers of the Parquet File Format](https://duckdb.org/2025/01/22/parquet-encodings.html), which explained how the engines that process Parquet files as SQL tables are blocking the evolution of the format. This is because those engines are not fully supporting the latest specification, and without this support, the rest of the ecosystem has no incentive to adopt it.\n\n### [Compression algorithms in Parquet](https://www.jeronimo.dev/compression-algorithms-parquet/)\n\nApache Parquet is a columnar storage format optimized for analytical workloads, though it can also be used to store any type of structured data solving multiple use cases.\n\nOne of its most notable features is the ability to efficiently compress data using different compression techniques at two stages of its process. This reduces storage costs and improves reading performance.\n\nThis article explains file compression in Parquet for Java, provides usage examples, and analyzes its performance.\n\n### [Working with Parquet files in Java using Parquet Carpet](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-carpet/)\n\nAfter some time working with Parquet files in Java using the Parquet Avro library, and studying how it worked, I concluded that despite **being very useful** in multiple use cases and having great potential, **the documentation and ecosystem needed for adoption in the Java world was very poor**.\n\nMany people are using suboptimal solutions (CSV or JSON files), applying more complex solutions (Spark), or using languages they are not familiar with (Python) because they don’t know how to work with Parquet files easily. That’s why I decided to **write this [series of articles](https://www.jeronimo.dev/working-with-parquet-files-in-java/)**.\n\nOnce you understand it and have the examples, everything is easier. But, **can it be even easier?** Can we avoid the hassle of using *strange* libraries that serialize other formats? 
**Yes, it should be even easier.**\n\nThat’s why I decided to **implement an Open Source library** that makes working with Parquet from Java extremely simple, something that covers it: **Carpet**.\n\n### [Working with Parquet files in Java using Protocol Buffers](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-protocol-buffers/)\n\nThis post continues the series of articles about working with Parquet files in Java. This time, I’ll explain how to do it using the Protocol Buffers (PB) library.\n\nFinding examples and documentation on how to use Parquet with Avro is challenging, but with **Protocol Buffers, it’s even more complicated**.\n\n### [Working with Parquet files in Java using Avro](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-avro/)\n\nIn the previous article, I wrote an introduction to using Parquet files in Java, but I did not include any examples. In this article, I will explain how to do this using the Avro library.\n\nParquet with Avro **is one of the most popular ways to work with Parquet files in Java** due to its simplicity, flexibility, and because it is the library with the most examples.\n\n### [Working with Parquet files in Java](https://www.jeronimo.dev/working-with-parquet-files-in-java/)\n\nParquet is a widely used format in the Data Engineering realm and holds significant potential for traditional Backend applications. This article serves as an **introduction to the format**, including some of the unique challenges I’ve faced while using it, to spare you from similar experiences.\n\n### [Java Serialization with Apache Avro](https://www.jeronimo.dev/java-serialization-with-avro/)\n\nIn previous posts I’ve analyzed [Protocol Buffers](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) and [FlatBuffers](https://www.jeronimo.dev/java-serialization-with-flatbuffers/), using JSON as the baseline. In this post, I will analyze Apache Avro and compare it with the previously studied formats.\n\n### [Java Serialization with Flatbuffers](https://www.jeronimo.dev/java-serialization-with-flatbuffers/)\n\nIn the [previous post](https://www.jeronimo.dev/java-serialization-with-protocol-buffers/) I analyzed Protocol Buffers format, using JSON as baseline. In this post I’m going to analyze FlatBuffers and compare it with previously studied formats.\n",
  "markdownStats": {
    "images": 0,
    "links": 6,
    "tables": 0,
    "codeBlocks": 0,
    "headings": 0
  },
  "tokens": {
    "htmlTokens": 3880,
    "markdownTokens": 892,
    "reduction": 2988,
    "reductionPercent": 77
  },
  "score": {
    "score": 76,
    "grade": "B",
    "dimensions": {
      "semanticHtml": {
        "score": 91,
        "weight": 20,
        "grade": "A",
        "checks": {
          "uses_article_or_main": {
            "score": 100,
            "weight": 20,
            "details": "Has both <article> and <main>"
          },
          "proper_heading_hierarchy": {
            "score": 65,
            "weight": 25,
            "details": "2 <h1> elements (should be 1), 1 heading level skip(s)"
          },
          "semantic_elements": {
            "score": 100,
            "weight": 20,
            "details": "46 semantic elements, 15 divs (ratio: 75%)"
          },
          "meaningful_alt_texts": {
            "score": 100,
            "weight": 15,
            "details": "1/1 images with meaningful alt text"
          },
          "low_div_nesting": {
            "score": 100,
            "weight": 20,
            "details": "Avg div depth: 2.2, max: 3"
          }
        }
      },
      "contentEfficiency": {
        "score": 86,
        "weight": 25,
        "grade": "B",
        "checks": {
          "token_reduction_ratio": {
            "score": 80,
            "weight": 40,
            "details": "77% token reduction (HTML→Markdown)"
          },
          "content_to_noise_ratio": {
            "score": 80,
            "weight": 30,
            "details": "Content ratio: 29.2% (4359 content chars / 14947 HTML bytes)"
          },
          "minimal_inline_styles": {
            "score": 100,
            "weight": 15,
            "details": "0/175 elements with inline styles (0.0%)"
          },
          "reasonable_page_weight": {
            "score": 100,
            "weight": 15,
            "details": "HTML size: 15KB"
          }
        }
      },
      "aiDiscoverability": {
        "score": 50,
        "weight": 25,
        "grade": "D",
        "checks": {
          "has_llms_txt": {
            "score": 0,
            "weight": 25,
            "details": "No llms.txt found"
          },
          "has_robots_txt": {
            "score": 100,
            "weight": 15,
            "details": "robots.txt exists"
          },
          "robots_allows_ai_bots": {
            "score": 100,
            "weight": 20,
            "details": "All major AI bots allowed"
          },
          "has_sitemap": {
            "score": 100,
            "weight": 15,
            "details": "Sitemap found"
          },
          "supports_markdown_negotiation": {
            "score": 0,
            "weight": 15,
            "details": "No markdown content negotiation"
          },
          "has_content_signals": {
            "score": 0,
            "weight": 10,
            "details": "No Content-Signal header"
          }
        }
      },
      "structuredData": {
        "score": 67,
        "weight": 15,
        "grade": "C",
        "checks": {
          "has_schema_org": {
            "score": 50,
            "weight": 30,
            "details": "JSON-LD found but basic types: WebSite"
          },
          "has_open_graph": {
            "score": 67,
            "weight": 25,
            "details": "2/3 OG tags present"
          },
          "has_meta_description": {
            "score": 50,
            "weight": 20,
            "details": "Meta description too short: 43 chars"
          },
          "has_canonical_url": {
            "score": 100,
            "weight": 15,
            "details": "Canonical URL present"
          },
          "has_lang_attribute": {
            "score": 100,
            "weight": 10,
            "details": "lang=\"en-US\""
          }
        }
      },
      "accessibility": {
        "score": 93,
        "weight": 15,
        "grade": "A",
        "checks": {
          "content_without_js": {
            "score": 100,
            "weight": 40,
            "details": "Content available without JavaScript"
          },
          "reasonable_page_size": {
            "score": 100,
            "weight": 30,
            "details": "Page size: 15KB"
          },
          "fast_content_position": {
            "score": 75,
            "weight": 30,
            "details": "Main content starts at 22% of HTML"
          }
        }
      }
    }
  },
  "recommendations": [
    {
      "id": "add_llms_txt",
      "priority": "critical",
      "category": "aiDiscoverability",
      "titleKey": "rec.add_llms_txt.title",
      "descriptionKey": "rec.add_llms_txt.description",
      "howToKey": "rec.add_llms_txt.howto",
      "effort": "quick-win",
      "estimatedImpact": 10,
      "checkScore": 0,
      "checkDetails": "No llms.txt found"
    },
    {
      "id": "add_markdown_negotiation",
      "priority": "critical",
      "category": "aiDiscoverability",
      "titleKey": "rec.add_markdown_negotiation.title",
      "descriptionKey": "rec.add_markdown_negotiation.description",
      "howToKey": "rec.add_markdown_negotiation.howto",
      "effort": "significant",
      "estimatedImpact": 4,
      "checkScore": 0,
      "checkDetails": "No markdown content negotiation"
    },
    {
      "id": "add_content_signals",
      "priority": "critical",
      "category": "aiDiscoverability",
      "titleKey": "rec.add_content_signals.title",
      "descriptionKey": "rec.add_content_signals.description",
      "howToKey": "rec.add_content_signals.howto",
      "effort": "moderate",
      "estimatedImpact": 3,
      "checkScore": 0,
      "checkDetails": "No Content-Signal header"
    },
    {
      "id": "fix_heading_hierarchy",
      "priority": "medium",
      "category": "semanticHtml",
      "titleKey": "rec.fix_heading_hierarchy.title",
      "descriptionKey": "rec.fix_heading_hierarchy.description",
      "howToKey": "rec.fix_heading_hierarchy.howto",
      "effort": "quick-win",
      "estimatedImpact": 6,
      "checkScore": 65,
      "checkDetails": "2 <h1> elements (should be 1), 1 heading level skip(s)"
    },
    {
      "id": "add_open_graph",
      "priority": "medium",
      "category": "structuredData",
      "titleKey": "rec.add_open_graph.title",
      "descriptionKey": "rec.add_open_graph.description",
      "howToKey": "rec.add_open_graph.howto",
      "effort": "quick-win",
      "estimatedImpact": 4,
      "checkScore": 67,
      "checkDetails": "2/3 OG tags present"
    }
  ],
  "llmsTxtPreview": "# Spartan Blog - Jerónimo\n\n> Jerolba’s blog. Tech, JVM and random stuff.\n\n## Main\n- [Spartan Blog - Jerónimo](https://www.jeronimo.dev/): Jerolba’s blog. Tech, JVM and random stuff.\n- [Integrating Spring Batch with Parquet](https://www.jeronimo.dev/integrating-spring-batch-with-parquet/)\n- [The Carpet feature that nobody will use](https://www.jeronimo.dev/the-carpet-feature-that-nobody-will-use/)\n- [The two versions of Parquet](https://www.jeronimo.dev/the-two-versions-of-parquet/)\n- [Compression algorithms in Parquet](https://www.jeronimo.dev/compression-algorithms-parquet/)\n- [Working with Parquet files in Java using Parquet Carpet](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-carpet/)\n- [Working with Parquet files in Java using Protocol Buffers](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-protocol-buffers/)\n- [Working with Parquet files in Java using Avro](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-avro/)\n- [Working with Parquet files in Java](https://www.jeronimo.dev/working-with-parquet-files-in-java/)\n\n",
  "llmsTxtExisting": null,
  "snippets": [
    {
      "id": "add_llms_txt",
      "title": "Create /llms.txt",
      "description": "Upload this file to your web root. It tells AI agents what your site is about and which pages matter.",
      "language": "markdown",
      "code": "# Spartan Blog - Jerónimo\n\n> Jerolba’s blog. Tech, JVM and random stuff.\n\n## Main\n- [Spartan Blog - Jerónimo](https://www.jeronimo.dev/): Jerolba’s blog. Tech, JVM and random stuff.\n- [Integrating Spring Batch with Parquet](https://www.jeronimo.dev/integrating-spring-batch-with-parquet/)\n- [The Carpet feature that nobody will use](https://www.jeronimo.dev/the-carpet-feature-that-nobody-will-use/)\n- [The two versions of Parquet](https://www.jeronimo.dev/the-two-versions-of-parquet/)\n- [Compression algorithms in Parquet](https://www.jeronimo.dev/compression-algorithms-parquet/)\n- [Working with Parquet files in Java using Parquet Carpet](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-carpet/)\n- [Working with Parquet files in Java using Protocol Buffers](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-protocol-buffers/)\n- [Working with Parquet files in Java using Avro](https://www.jeronimo.dev/working-with-parquet-files-in-java-using-avro/)\n- [Working with Parquet files in Java](https://www.jeronimo.dev/working-with-parquet-files-in-java/)\n\n",
      "filename": "/llms.txt"
    },
    {
      "id": "fix_heading_hierarchy",
      "title": "Fix heading hierarchy",
      "description": "Your page has 2 <h1> elements. Keep only one. Demote the rest to <h2>.",
      "language": "html",
      "code": "<!-- Keep only one <h1> per page -->\n<h1>Spartan Blog - Jerónimo</h1>",
      "filename": "<main> or <article>"
    },
    {
      "id": "add_open_graph",
      "title": "Add missing Open Graph tags",
      "description": "These tags control how your page looks when shared on social media and some AI platforms.",
      "language": "html",
      "code": "<meta property=\"og:image\" content=\"https://yoursite.com/og-image.jpg\">\n<meta property=\"og:url\" content=\"https://www.jeronimo.dev/\">\n<meta property=\"og:type\" content=\"website\">",
      "filename": "<head>"
    },
    {
      "id": "add_content_signals",
      "title": "Add Content-Signal HTTP header",
      "description": "The Content-Signal header tells AI agents about the nature of your content. Add it via your web server or CDN.",
      "language": "nginx",
      "code": "# Nginx — add to your server block:\nadd_header Content-Signal \"type=website; lang=en-US\" always;\n\n# Apache — add to .htaccess:\n# Header set Content-Signal \"type=website; lang=en-US\"",
      "filename": "nginx.conf or .htaccess"
    },
    {
      "id": "add_markdown_negotiation",
      "title": "Support Accept: text/markdown",
      "description": "When a client sends Accept: text/markdown, respond with a Markdown version of the page. This is the gold standard for AI-readiness.",
      "language": "nginx",
      "code": "# Nginx — serve .md files when client requests Markdown:\n# Option 1: Serve pre-generated .md files\nmap $http_accept $markdown_suffix {\n  default \"\";\n  \"~text/markdown\" \".md\";\n}\n\n# Then in your location block:\ntry_files $uri$markdown_suffix $uri =404;\n\n# Option 2: Use your app framework to check the Accept header\n# and return Markdown content with Content-Type: text/markdown",
      "filename": "nginx.conf or application code"
    }
  ]
}

Use our API to retrieve this programmatically (coming soon)

This JSON is intended for internal use; unlike the Markdown and llms.txt files, it is not meant to be uploaded to your website. Save it as a baseline to track your score over time, share it with your development team, or integrate it into your CI/CD pipeline.
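A minimal sketch of such a CI gate, assuming you save this report as agentready-baseline.json and later runs as agentready-current.json (the file names and the comparison policy are illustrative; the score.score field is taken from the JSON structure above):

```python
import json
import sys

# Load the saved baseline and a freshly generated report
# (file names are illustrative; adjust to your pipeline).
with open("agentready-baseline.json") as f:
    baseline = json.load(f)["score"]["score"]
with open("agentready-current.json") as f:
    current = json.load(f)["score"]["score"]

print(f"AI-Ready score: baseline={baseline}, current={current}")

# Fail the build if the score regressed below the saved baseline.
if current < baseline:
    sys.exit(f"AI-Ready score dropped from {baseline} to {current}")
```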


Embed badge

Add this badge to your website. It updates automatically whenever your AI-readiness score changes.

AgentReady.md score for www.jeronimo.dev
Script (recommended)
<script src="https://agentready.md/badge.js" data-id="e2a5c805-749c-45e5-953d-a16464f3ebcc" data-domain="www.jeronimo.dev"></script>
Markdown
[![AgentReady.md score for www.jeronimo.dev](https://agentready.md/badge/www.jeronimo.dev.svg)](https://agentready.md/de/r/e2a5c805-749c-45e5-953d-a16464f3ebcc)

Coming soon: Full domain analysis

Crawl your entire domain, generate llms.txt, and monitor your AI-readiness score over time. Join the waitlist.
