1. Introduction to Multi-dimensional Tables

The following are open-source projects with multi-dimensional-table (Airtable-style) characteristics that have still been maintained within the last six months. To get a quick sense of their features and direction, their slogans are listed as well. As you can see, most of these projects position themselves as an "Airtable alternative" and "No-Code" tool. Whatever their slogans say, we will evaluate and choose among them from the perspective of a multi-dimensional table.

  • NocoDB, Open source Airtable alternative.
  • Baserow, An open source no-code database and Airtable alternative. Create your own database without technical experience. Our user friendly no-code tool gives you the powers of a developer without leaving your browser.
  • Teable, AI No-code Database, Productivity Super Boost.
  • APITable, An API-oriented low-code platform for building collaborative apps and better than all other Airtable open-source alternatives.
  • Rowy, Low-code backend platform. Manage database on spreadsheet-like UI and build cloud functions workflows in JS/TS, all in your browser.
  • undb, Private first self-hosted no code database & BaaS.

Beyond the products above, the open source community also offers the following table-shaped projects: Grist is closer to an Excel-style spreadsheet and supports Excel formulas; Mathesar leans toward being a visualization tool for Postgres databases; NocoBase is geared more toward low-code application building.

  • Grist, A modern, open source spreadsheet that goes beyond the grid.
  • Mathesar, Secure, spreadsheet-like tool for Postgres data.
  • NocoBase, Extensibility-first open-source no-code platform.

Overall, NocoDB, Baserow, Teable and APITable are the products that most resemble multi-dimensional tables, with relatively complete features and relatively few bugs. This article focuses on these four only. (Rowy depends on Firebase and cannot be fully self-hosted; undb is still missing a number of basic features and has quite a few bugs, so neither is recommended for now.)

1.1. Multi-dimensional Table Features

Some basic notes on multi-dimensional tables. Within such a product, the user-facing hierarchy from top to bottom is "Workspace/Space" → "Database" → "Grid/Table" → "View"; a table's column headers are generally called "Fields". One table can spawn multiple views, and tables can reference each other's data via "Link", "Lookup" and "Rollup" fields. The backend exposes all user data through an API. Vendors typically place usage limits around these core features to drive paid upgrades.
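As a rough illustration (the class and attribute names below are generic stand-ins, not any product's actual API), the hierarchy described above can be modeled as nested containers:

```python
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    type: str          # e.g. "text", "number", "link", "lookup", "rollup"

@dataclass
class View:
    name: str
    kind: str          # e.g. "grid", "kanban", "calendar"

@dataclass
class Table:
    name: str
    fields: list[Field]  # column headers
    views: list[View]    # one table, many views

@dataclass
class Database:
    name: str
    tables: list[Table]

@dataclass
class Workspace:
    name: str
    databases: list[Database]

# A tiny example tree: Workspace -> Database -> Table -> Views/Fields
ws = Workspace("Demo", [Database("CRM", [
    Table("Customers",
          fields=[Field("Name", "text"), Field("Orders", "link")],
          views=[View("All customers", "grid"), View("Pipeline", "kanban")]),
])])
```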

2. Baserow Deployment

2.1. Create Directories

mkdir -pv /data/baserow/{caddy_config,caddy_data,media,pgdata}

2.2. .env

Only the settings that need changing for an initial run are listed here.

SECRET_KEY=lf6ra1wsz4e5ljw0dylsltryws2ioer6g1neffhfuuewynvsee
DATABASE_PASSWORD=27hd9rvxebg1b9m0lxfv8r7qjq7zdytaillg6n2omvvrnk8t1j
REDIS_PASSWORD=e3b0da8qcvpg5b8z35zptc19vpsatd2dlivme0ya8vsqwjeaf5

# To increase the security of the SECRET_KEY, you should also set
# BASEROW_JWT_SIGNING_KEY=

# The browser URL you will access Baserow with. Used to connect to the api, generate emails, etc.
BASEROW_PUBLIC_URL=http://192.168.182.100
......
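The secrets above are just 50-character random strings; you can generate your own locally, for example with openssl (assuming it is installed):

```shell
# Generate one 50-character secret per variable (25 random bytes -> 50 hex chars)
SECRET_KEY=$(openssl rand -hex 25)
DATABASE_PASSWORD=$(openssl rand -hex 25)
REDIS_PASSWORD=$(openssl rand -hex 25)
printf 'SECRET_KEY=%s\n' "$SECRET_KEY"
```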

2.3. baserow.yml

version: "3.4"
# See https://baserow.io/docs/installation%2Fconfiguration for more details on these
# backend environment variables, their defaults if left blank etc.
x-backend-variables: &backend-variables
  # Most users should only need to set these first four variables.
  SECRET_KEY: ${SECRET_KEY:?}
  BASEROW_JWT_SIGNING_KEY: ${BASEROW_JWT_SIGNING_KEY:-}
  DATABASE_PASSWORD: ${DATABASE_PASSWORD:?}
  REDIS_PASSWORD: ${REDIS_PASSWORD:?}
  # If you manually change this line make sure you also change the duplicate line in
  # the web-frontend service.
  BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}

  # Set these if you want to use an external postgres instead of the db service below.
  DATABASE_USER: ${DATABASE_USER:-baserow}
  DATABASE_NAME: ${DATABASE_NAME:-baserow}
  DATABASE_HOST:
  DATABASE_PORT:
  DATABASE_OPTIONS:
  DATABASE_URL:

  # Set these if you want to use an external redis instead of the redis service below.
  REDIS_HOST:
  REDIS_PORT:
  REDIS_PROTOCOL:
  REDIS_URL:
  REDIS_USER:

  # Set these to enable Baserow to send emails.
  EMAIL_SMTP:
  EMAIL_SMTP_HOST:
  EMAIL_SMTP_PORT:
  EMAIL_SMTP_USE_TLS:
  EMAIL_SMTP_USE_SSL:
  EMAIL_SMTP_USER:
  EMAIL_SMTP_PASSWORD:
  EMAIL_SMTP_SSL_CERTFILE_PATH:
  EMAIL_SMTP_SSL_KEYFILE_PATH:
  FROM_EMAIL:

  # Set these to use AWS S3 bucket to store user files.
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_ACCESS_KEY:
  AWS_STORAGE_BUCKET_NAME:
  AWS_S3_REGION_NAME:
  AWS_S3_ENDPOINT_URL:
  AWS_S3_CUSTOM_DOMAIN:

  # Misc settings see https://baserow.io/docs/installation%2Fconfiguration for info
  BASEROW_AMOUNT_OF_WORKERS:
  BASEROW_ROW_PAGE_SIZE_LIMIT:
  BATCH_ROWS_SIZE_LIMIT:
  INITIAL_TABLE_DATA_LIMIT:
  BASEROW_FILE_UPLOAD_SIZE_LIMIT_MB:
  BASEROW_OPENAI_UPLOADED_FILE_SIZE_LIMIT_MB:
  BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT:

  BASEROW_EXTRA_ALLOWED_HOSTS:
  ADDITIONAL_APPS:
  BASEROW_PLUGIN_GIT_REPOS:
  BASEROW_PLUGIN_URLS:

  BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER:
  MIGRATE_ON_STARTUP: ${MIGRATE_ON_STARTUP:-true}
  SYNC_TEMPLATES_ON_STARTUP: ${SYNC_TEMPLATES_ON_STARTUP:-true}
  BASEROW_SYNC_TEMPLATES_PATTERN:
  DONT_UPDATE_FORMULAS_AFTER_MIGRATION:
  BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION:
  BASEROW_SYNC_TEMPLATES_TIME_LIMIT:

  BASEROW_BACKEND_DEBUG:
  BASEROW_BACKEND_LOG_LEVEL:
  FEATURE_FLAGS:
  BASEROW_ENABLE_OTEL:
  BASEROW_DEPLOYMENT_ENV:
  OTEL_EXPORTER_OTLP_ENDPOINT:
  OTEL_RESOURCE_ATTRIBUTES:
  POSTHOG_PROJECT_API_KEY:
  POSTHOG_HOST:

  PRIVATE_BACKEND_URL: http://backend:8000
  PUBLIC_BACKEND_URL:
  PUBLIC_WEB_FRONTEND_URL:
  BASEROW_EMBEDDED_SHARE_URL:
  MEDIA_URL:
  MEDIA_ROOT:

  BASEROW_AIRTABLE_IMPORT_SOFT_TIME_LIMIT:
  HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
  OLD_ACTION_CLEANUP_INTERVAL_MINUTES:
  MINUTES_UNTIL_ACTION_CLEANED_UP:
  BASEROW_GROUP_STORAGE_USAGE_QUEUE:
  DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
  BASEROW_WAIT_INSTEAD_OF_409_CONFLICT_ERROR:
  BASEROW_DISABLE_MODEL_CACHE:
  BASEROW_PLUGIN_DIR:
  BASEROW_JOB_EXPIRATION_TIME_LIMIT:
  BASEROW_JOB_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ROW_HISTORY_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ROW_HISTORY_RETENTION_DAYS:
  BASEROW_USER_LOG_ENTRY_CLEANUP_INTERVAL_MINUTES:
  BASEROW_USER_LOG_ENTRY_RETENTION_DAYS:
  BASEROW_IMPORT_EXPORT_RESOURCE_CLEANUP_INTERVAL_MINUTES:
  BASEROW_IMPORT_EXPORT_RESOURCE_REMOVAL_AFTER_DAYS:
  BASEROW_IMPORT_EXPORT_TABLE_ROWS_COUNT_LIMIT:
  BASEROW_MAX_ROW_REPORT_ERROR_COUNT:
  BASEROW_JOB_SOFT_TIME_LIMIT:
  BASEROW_FRONTEND_JOBS_POLLING_TIMEOUT_MS:
  BASEROW_INITIAL_CREATE_SYNC_TABLE_DATA_LIMIT:
  BASEROW_MAX_SNAPSHOTS_PER_GROUP:
  BASEROW_SNAPSHOT_EXPIRATION_TIME_DAYS:
  BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS:
  BASEROW_WEBHOOKS_IP_BLACKLIST:
  BASEROW_WEBHOOKS_IP_WHITELIST:
  BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST:
  BASEROW_WEBHOOKS_URL_CHECK_TIMEOUT_SECS:
  BASEROW_WEBHOOKS_MAX_CONSECUTIVE_TRIGGER_FAILURES:
  BASEROW_WEBHOOKS_MAX_RETRIES_PER_CALL:
  BASEROW_WEBHOOKS_MAX_PER_TABLE:
  BASEROW_WEBHOOKS_MAX_CALL_LOG_ENTRIES:
  BASEROW_WEBHOOKS_REQUEST_TIMEOUT_SECONDS:
  BASEROW_ENTERPRISE_AUDIT_LOG_CLEANUP_INTERVAL_MINUTES:
  BASEROW_ENTERPRISE_AUDIT_LOG_RETENTION_DAYS:
  BASEROW_ALLOW_MULTIPLE_SSO_PROVIDERS_FOR_SAME_ACCOUNT:
  BASEROW_STORAGE_USAGE_JOB_CRONTAB:
  BASEROW_SEAT_USAGE_JOB_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_CRONTAB:
  BASEROW_PERIODIC_FIELD_UPDATE_UNUSED_WORKSPACE_INTERVAL_MIN:
  BASEROW_PERIODIC_FIELD_UPDATE_TIMEOUT_MINUTES:
  BASEROW_PERIODIC_FIELD_UPDATE_QUEUE_NAME:
  BASEROW_MAX_CONCURRENT_USER_REQUESTS:
  BASEROW_CONCURRENT_USER_REQUESTS_THROTTLE_TIMEOUT:
  BASEROW_SEND_VERIFY_EMAIL_RATE_LIMIT:
  BASEROW_LOGIN_ACTION_LOG_LIMIT:
  BASEROW_OSS_ONLY:
  OTEL_TRACES_SAMPLER:
  OTEL_TRACES_SAMPLER_ARG:
  OTEL_PER_MODULE_SAMPLER_OVERRIDES:
  BASEROW_CACHALOT_ENABLED:
  BASEROW_CACHALOT_MODE:
  BASEROW_CACHALOT_ONLY_CACHABLE_TABLES:
  BASEROW_CACHALOT_UNCACHABLE_TABLES:
  BASEROW_CACHALOT_TIMEOUT:
  BASEROW_BUILDER_PUBLICLY_USED_PROPERTIES_CACHE_TTL_SECONDS:
  BASEROW_BUILDER_DISPATCH_ACTION_CACHE_TTL_SECONDS:
  BASEROW_AUTO_INDEX_VIEW_ENABLED:
  BASEROW_PERSONAL_VIEW_LOWEST_ROLE_ALLOWED:
  BASEROW_DISABLE_LOCKED_MIGRATIONS:
  BASEROW_USE_PG_FULLTEXT_SEARCH:
  BASEROW_AUTO_VACUUM:
  BASEROW_BUILDER_DOMAINS:
  SENTRY_DSN:
  SENTRY_BACKEND_DSN:
  BASEROW_OPENAI_API_KEY:
  BASEROW_OPENAI_ORGANIZATION:
  BASEROW_OPENAI_MODELS:
  BASEROW_OPENROUTER_API_KEY:
  BASEROW_OPENROUTER_ORGANIZATION:
  BASEROW_OPENROUTER_MODELS:
  BASEROW_ANTHROPIC_API_KEY:
  BASEROW_ANTHROPIC_MODELS:
  BASEROW_MISTRAL_API_KEY:
  BASEROW_MISTRAL_MODELS:
  BASEROW_OLLAMA_HOST:
  BASEROW_OLLAMA_MODELS:
  BASEROW_SERVE_FILES_THROUGH_BACKEND:
  BASEROW_SERVE_FILES_THROUGH_BACKEND_PERMISSION:
  BASEROW_SERVE_FILES_THROUGH_BACKEND_EXPIRE_SECONDS:
  BASEROW_ICAL_VIEW_MAX_EVENTS: ${BASEROW_ICAL_VIEW_MAX_EVENTS:-}
  BASEROW_ACCESS_TOKEN_LIFETIME_MINUTES:
  BASEROW_REFRESH_TOKEN_LIFETIME_HOURS:
  BASEROW_PREVENT_POSTGRESQL_DATA_SYNC_CONNECTION_TO_DATABASE:
  BASEROW_POSTGRESQL_DATA_SYNC_BLACKLIST:
  BASEROW_ASGI_HTTP_MAX_CONCURRENCY: ${BASEROW_ASGI_HTTP_MAX_CONCURRENCY:-}
  BASEROW_MAX_WEBHOOK_CALLS_IN_QUEUE_PER_WEBHOOK:
  BASEROW_MAX_HEALTHY_CELERY_QUEUE_SIZE:
  BASEROW_ENTERPRISE_PERIODIC_DATA_SYNC_CHECK_INTERVAL_MINUTES:
  BASEROW_ENTERPRISE_MAX_PERIODIC_DATA_SYNC_CONSECUTIVE_ERRORS:
  BASEROW_USE_LOCAL_CACHE:
  BASEROW_WEBHOOKS_BATCH_LIMIT:
  BASEROW_WEBHOOK_ROWS_ENTER_VIEW_BATCH_SIZE:
  BASEROW_DEADLOCK_INITIAL_BACKOFF:
  BASEROW_DEADLOCK_MAX_RETRIES:
  BASEROW_PREMIUM_GROUPED_AGGREGATE_SERVICE_MAX_SERIES:
  BASEROW_PREMIUM_GROUPED_AGGREGATE_SERVICE_MAX_AGG_BUCKETS:

services:
  # A caddy reverse proxy sitting in-front of all the services. Responsible for routing
  # requests to either the backend or web-frontend and also serving user uploaded files
  # from the media volume.
  caddy:
    image: caddy:2
    restart: unless-stopped
    environment:
      # Controls what port the Caddy server binds to inside its container.
      BASEROW_CADDY_ADDRESSES: ${BASEROW_CADDY_ADDRESSES:-:80}
      PRIVATE_WEB_FRONTEND_URL: ${PRIVATE_WEB_FRONTEND_URL:-http://web-frontend:3000}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL:-}
    ports:
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_PORT:-80}:80"
      - "${HOST_PUBLISH_IP:-0.0.0.0}:${WEB_FRONTEND_SSL_PORT:-443}:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - media:/baserow/media
      - caddy_config:/config
      - caddy_data:/data
    networks:
      local:

  backend:
    image: baserow/backend:1.33.4
    restart: unless-stopped

    environment:
      <<: *backend-variables
    depends_on:
      - db
      - redis
    volumes:
      - media:/baserow/media
    networks:
      local:

  web-frontend:
    image: baserow/web-frontend:1.33.4
    restart: unless-stopped
    environment:
      BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL-http://localhost}
      PRIVATE_BACKEND_URL: ${PRIVATE_BACKEND_URL:-http://backend:8000}
      PUBLIC_BACKEND_URL:
      PUBLIC_WEB_FRONTEND_URL:
      BASEROW_EMBEDDED_SHARE_URL:
      BASEROW_DISABLE_PUBLIC_URL_CHECK:
      INITIAL_TABLE_DATA_LIMIT:
      DOWNLOAD_FILE_VIA_XHR:
      BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW:
      BASEROW_DISABLE_SUPPORT:
      HOURS_UNTIL_TRASH_PERMANENTLY_DELETED:
      DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS:
      FEATURE_FLAGS:
      ADDITIONAL_MODULES:
      BASEROW_MAX_IMPORT_FILE_SIZE_MB:
      BASEROW_MAX_SNAPSHOTS_PER_GROUP:
      BASEROW_ENABLE_OTEL:
      BASEROW_DEPLOYMENT_ENV:
      BASEROW_OSS_ONLY:
      BASEROW_USE_PG_FULLTEXT_SEARCH:
      POSTHOG_PROJECT_API_KEY:
      POSTHOG_HOST:
      BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT:
      BASEROW_ROW_PAGE_SIZE_LIMIT:
      BASEROW_BUILDER_DOMAINS:
      BASEROW_FRONTEND_SAME_SITE_COOKIE:
      SENTRY_DSN:
      BASEROW_PREMIUM_GROUPED_AGGREGATE_SERVICE_MAX_SERIES:
      BASEROW_PREMIUM_GROUPED_AGGREGATE_SERVICE_MAX_AGG_BUCKETS:
    depends_on:
      - backend
    networks:
      local:

  celery:
    image: baserow/backend:1.33.4
    restart: unless-stopped
    environment:
      <<: *backend-variables
    command: celery-worker
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-worker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-export-worker:
    image: baserow/backend:1.33.4
    restart: unless-stopped
    command: celery-exportworker
    environment:
      <<: *backend-variables
    # The backend image's baked in healthcheck defaults to the django healthcheck
    # override it to the celery one here.
    healthcheck:
      test: [ "CMD-SHELL", "/baserow/backend/docker/docker-entrypoint.sh celery-exportworker-healthcheck" ]
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  celery-beat-worker:
    image: baserow/backend:1.33.4
    restart: unless-stopped
    command: celery-beat
    environment:
      <<: *backend-variables
    # See https://github.com/sibson/redbeat/issues/129#issuecomment-1057478237
    stop_signal: SIGQUIT
    depends_on:
      - backend
    volumes:
      - media:/baserow/media
    networks:
      local:

  db:
    image: chaitin/safeline-postgres:15.2
    # If you were using a previous version, perform the update by uncommenting the
    # following line. See: https://baserow.io/docs/installation%2Finstall-with-docker#upgrading-postgresql-database-from-a-previous-version
    # for more information.
    # image: pgautoupgrade/pgautoupgrade:15-alpine3.8
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${DATABASE_USER:-baserow}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD:?}
      - POSTGRES_DB=${DATABASE_NAME:-baserow}
    healthcheck:
      test: [ "CMD-SHELL", "su postgres -c \"pg_isready -U ${DATABASE_USER:-baserow}\"" ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      local:
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:6
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD:?}
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    networks:
      local:

  # By default, the media volume will be owned by root on startup. Ensure it is owned by
  # the same user that django is running as, so it can write user files.
  volume-permissions-fixer:
    image: bash:4.4
    command: chown 9999:9999 -R /baserow/media
    volumes:
      - media:/baserow/media
    networks:
      local:

volumes:
  pgdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/baserow/pgdata
  media:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/baserow/media
  caddy_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/baserow/caddy_data
  caddy_config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/baserow/caddy_config

networks:
  local:
    driver: bridge

3. Bypassing the Paid Feature Gate

Baserow's paid features include:

  • Views: survey forms, Kanban, Calendar, Gantt, and view locking;
  • Fields: AI prompt;
  • Import/export: export to XML and JSON;
  • Data connections: Internal Base, iCal, Jira issues, GitLab issues, GitHub issues, HubSpot customers;
  • Others: Single sign-on (SSO), Audit logs, SAML, Roles, Row comments, Row coloring, Charts, AI Chat.

3.1. Baserow's License Verification Mechanism

Baserow uses a digital license mechanism based on asymmetric cryptography:

License content:

  • carries the key authorization data: instance ID, license type, validity period, seats, and so on;
  • is transmitted Base64-encoded;

Signature:

  • the license is signed with the private key (SHA-256);
  • the client verifies the signature's authenticity with the public key;

Verification flow:

  • the server generates and signs the license;
  • the client verifies the signature and decodes the license;
  • features are unlocked according to the license content.

So all we need to do is: generate our own key pair, craft a custom license and sign it with the private key, and redirect the remote license server to a local service.
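The whole sign-and-verify round trip can be sketched with the cryptography library before touching the deployment (the key pair and payload below are throwaway illustrations, not Baserow's real keys):

```python
import base64
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A throwaway key pair standing in for the pair we generate later.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# "Server" side: base64-encode the payload, then sign its SHA-256 hex digest.
payload_b64 = base64.urlsafe_b64encode(b'{"product_code": "enterprise"}')
digest = hashlib.sha256(payload_b64).hexdigest().encode()
signature = private_key.sign(
    digest,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
license_key = payload_b64 + b"." + base64.urlsafe_b64encode(signature)

# "Client" side: split the key, verify the signature, then decode the payload.
p64, s64 = license_key.split(b".")
public_key.verify(                      # raises InvalidSignature if tampered
    base64.urlsafe_b64decode(s64),
    hashlib.sha256(p64).hexdigest().encode(),
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
decoded = base64.urlsafe_b64decode(p64)
```

Whoever holds the private key can mint licenses that the client accepts, which is exactly what swapping in our own public key exploits.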

3.2. Procedure

Taking a Docker deployment of Baserow as the example, with the container named baserow:

1. Edit the host machine's hosts file to point the license server at localhost, so that upgrades never reach the official server:

# sudo vim /etc/hosts
127.0.0.1 api.baserow.io

2. Generate a key pair with python creat_pem.py:

# creat_pem.py
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# Generate a 2048-bit RSA key pair.
private_key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
)
public_key = private_key.public_key()

# Serialize the private key (unencrypted PKCS#8 PEM); keep this secret.
private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption()
)
print("=== PRIVATE KEY ===")
print(private_pem.decode())

# Serialize the public key (SubjectPublicKeyInfo PEM); this will replace
# the public key bundled inside the Baserow container.
public_pem = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo
)
print("\n=== PUBLIC KEY ===")
print(public_pem.decode())

3. Save the public key to a file public_key.pem and replace the public key inside the container:

docker cp public_key.pem baserow:/baserow/premium/backend/src/baserow_premium/

4. Change the license server inside the container. First copy the source file out:

docker cp baserow:/baserow/backend/src/baserow/core/utils.py ./

Change the value of base_url to the address of your own HTTP server, e.g. https://api.example.com. The server can be built with any service: n8n, Node-RED, Flask, and so on. For an n8n server the address would be http://{N8N_EDITOR_BASE_URL}:{N8N_PORT}/{N8N_ENDPOINT_WEBHOOK}.

# vim utils.py
# change this line:
base_url = "https://api.baserow.io"
# to your own server, e.g.:
base_url = "https://api.example.com"

# overwrite the source file inside the container
docker cp utils.py baserow:/baserow/backend/src/baserow/core/

5. Restart Baserow:

docker compose restart

6. Generate the license with the following script:

# creat_lic.py
import base64
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_private_key

# The license payload; adjust each field to your own instance (meanings below).
data = '{"version": 1,"id": "1","valid_from": "2025-01-01T00:00:00","valid_through": "2035-12-31T23:59:59","product_code": "enterprise","seats": 5,"issued_on": "2025-01-01T00:00:00","issued_to_email": "peter@baserow.io","issued_to_name": "Peter","instance_id": "abcdefghijklmn-U"}'

# Paste the private key printed by creat_pem.py here.
private_key_pem = b"""
-----BEGIN PRIVATE KEY-----
MIIEvQI........
-----END PRIVATE KEY-----
"""
private_key = load_pem_private_key(private_key_pem, password=None)

# Sign the SHA-256 hex digest of the base64-encoded payload with RSA-PSS.
payload_base64 = base64.urlsafe_b64encode(data.encode())
pre_hashed = hashlib.sha256(payload_base64).hexdigest().encode()
signature = private_key.sign(
    pre_hashed,
    padding.PSS(
        mgf=padding.MGF1(hashes.SHA256()),
        salt_length=padding.PSS.MAX_LENGTH,
    ),
    hashes.SHA256(),
)
signature_base64 = base64.urlsafe_b64encode(signature)

# The final license key is "<payload>.<signature>", both urlsafe-base64.
license_payload = payload_base64 + b"." + signature_base64
print("Generated license_payload:", license_payload.decode())

Edit the data payload above to match your instance; the fields mean:

{
	"version": 1, // do not change
	"id": "123-2-435-gfg-4tg54", // license id, any value
	"valid_from": "2025-02-01T00:00:00", // start date, any value
	"valid_through": "2028-01-01T23:59:59", // expiry date, any value
	"product_code": "enterprise", // premium, advanced or enterprise (see baserow.io/pricing); usually just use the top tier, enterprise
	"seats": 10, // any value
	"application_users": 10, // any value
	"issued_on": "2025-02-11T13:36:21.075873", // any value
	"issued_to_email": "peter@baserow.io", // admin email, see /admin/users
	"issued_to_name": "Peter", // admin name, see /admin/users
	"instance_id": "abcdefghijklmn-U" // instance ID, see "Your Baserow instance ID is" on /admin/licenses
}

Generate the license:

python creat_lic.py
# prints:
# Generated license_payload: eyJ2......

7. Write the license-check service response. The service must reply with the JSON below, where {Generated license_payload} is the "eyJ2......" value generated in the previous step:

{
  "{Generated license_payload}": {
    "type": "ok",
    "detail": ""
  }
}

The response can be built with n8n's Webhook + Respond to Webhook nodes.

In the Webhook node, set HTTP Method to POST, Path to /api/saas/licenses/check/, and Respond to "Using Respond to Webhook Node".

In the Respond to Webhook node, set Respond With to JSON, fill Response Body with the JSON above, and set Response Code to 200.
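As an alternative to n8n, a minimal Flask sketch of the same responder could look like this (the LICENSE_PAYLOAD value and port are placeholders; since base_url uses https, in practice you would front this with a TLS-terminating proxy such as Caddy or nginx):

```python
# mock_license_server.py - minimal stand-in for the license-check endpoint (sketch)
from flask import Flask, jsonify

app = Flask(__name__)

# Paste the value printed by creat_lic.py here (placeholder below).
LICENSE_PAYLOAD = "eyJ2......"

@app.route("/api/saas/licenses/check/", methods=["POST"])
def check_licenses():
    # Answer "ok" for our self-signed license.
    return jsonify({LICENSE_PAYLOAD: {"type": "ok", "detail": ""}})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```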

8. Test the registration

In Baserow, open /admin/licenses, click the Register license button, paste the eyJ2...... license value into the License key box, and confirm. The request then shows up in n8n's execution history.

9. When upgrading Baserow, simply repeat steps 3-5. Alternatively, map the two files above into the container once and for all in docker-compose.yml. (Note that utils.py may change between versions.)

