Why Our Team Migrated from Bitbucket Data Center to Gitea Enterprise

cover

In software development, Git needs little introduction: it is the world's most popular version control system and a foundational tool for modern collaborative development. And any mention of Git naturally brings to mind GitHub, the largest and best-known open-source code-hosting platform today.

However, for many private companies or small to mid-sized teams, GitHub may not be an option due to security, cost, deployment strategies, or regulatory requirements. In such cases, what tools can serve as an internal Git repository platform? The most common choices include GitLab and Gitea, which is the focus of this article.

For some teams, Gitea might still be relatively unfamiliar. Simply put, Gitea is a lightweight, self-hosted Git platform written in Go, providing GitHub-like capabilities such as code hosting, permission management, Issues and Pull Requests, and CI/CD. You can find a more comprehensive explanation in the official documentation (Gitea Documentation). It’s cross-platform, easy to deploy, and low-maintenance, which is why it has been increasingly favored by small and medium-sized teams.

The main purpose of this article is to share why our team ultimately decided to migrate from Bitbucket Data Center to Gitea—and why we didn’t choose a more feature-rich but comparatively heavier open-source solution like GitLab.

[Read More]

Building AI-Powered GitHub Workflows: A Complete Guide to LLM Action

blog cover

In the AI era, integrating Large Language Models into CI/CD pipelines has become crucial for improving development efficiency. However, existing solutions are often tied to specific service providers, and LLM outputs are typically unstructured free-form text that is difficult to parse and use reliably in automated workflows. LLM Action was created to solve these pain points.

The core feature is support for Tool Schema structured output—you can predefine a JSON Schema to force LLM responses to conform to a specified format. This means AI no longer just returns a block of text, but produces predictable, parseable structured data. Each field is automatically converted into GitHub Actions output variables, allowing subsequent steps to use them directly without additional string parsing or regex processing. This completely solves the problem of unstable LLM output that is difficult to integrate into automated workflows.
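
To make the mechanism concrete, here is a minimal sketch of the idea (the schema and field names are illustrative examples, not LLM Action's actual interface): a schema-constrained response arrives as JSON, and each top-level field is appended to the `GITHUB_OUTPUT` file so later workflow steps can consume it without any string parsing.

```python
import json

# Illustrative JSON Schema the LLM response is forced to conform to.
# A real action would pass this to the model as a tool definition.
REVIEW_SCHEMA = {
    "type": "object",
    "properties": {
        "score": {"type": "integer", "minimum": 0, "maximum": 10},
        "issues": {"type": "array", "items": {"type": "string"}},
        "suggestions": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["score", "issues", "suggestions"],
}

def to_github_outputs(response_json: str, output_path: str) -> dict:
    """Parse a structured LLM response and append each field as a
    key=value line to the GITHUB_OUTPUT file, the mechanism GitHub
    Actions uses to expose step outputs to subsequent steps."""
    data = json.loads(response_json)
    with open(output_path, "a") as f:
        for key, value in data.items():
            # Non-scalar fields are re-serialized as JSON strings so
            # downstream steps can fromJSON() them if needed.
            if not isinstance(value, (str, int, float, bool)):
                value = json.dumps(value)
            f.write(f"{key}={value}\n")
    return data
```

A later step can then reference these values directly, e.g. `${{ steps.review.outputs.score }}`, with no regex post-processing.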

Additionally, LLM Action provides a unified interface for connecting to any OpenAI-compatible service: cloud offerings such as OpenAI and Azure OpenAI, as well as locally deployed self-hosted options like Ollama, LocalAI, LM Studio, and vLLM, can all be switched between seamlessly.

Practical use cases include:

  • Automated Code Review: Define a Schema to output fields like score, issues, suggestions, directly used to determine whether the review passes
  • PR Summary Generation: Structured output of title, summary, breaking_changes for automatic PR description updates
  • Issue Classification: Output category, priority, labels to automatically tag Issues
  • Release Notes: Generate arrays of features, bugfixes, breaking to automatically compose formatted release notes
  • Multi-language Translation: Batch output multiple language fields, completing multi-language translation in a single API call
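
For the code-review case above, the downstream pass/fail decision might look like the following sketch (the score threshold and field names are assumptions mirroring the bullet, not a documented contract): because the output is structured, the gate is a plain comparison rather than text scraping.

```python
import json

def review_passes(outputs: dict, min_score: int = 7) -> bool:
    """Decide whether an automated review passes, given the structured
    step outputs: the score must meet an assumed threshold and no
    blocking issues may remain."""
    issues = outputs["issues"]
    # Step outputs arrive as strings; array fields were serialized as JSON.
    if isinstance(issues, str):
        issues = json.loads(issues)
    return int(outputs["score"]) >= min_score and len(issues) == 0
```

In a workflow this would typically gate a later step with a condition on the boolean result.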

Through Schema definition, LLM Action transforms AI output from “unpredictable text” to “programmable data,” truly enabling end-to-end AI automated workflows.

[Read More]
