By default, Metabase lets you query relational databases and a few other sources. It does not, however, play well with external sources such as HTTP APIs or other services that supply JSON. Still, there is a way to teach it to fetch data from APIs. There have been many attempts to implement this natively (including an HTTP driver by yours truly), but they are outdated, broken, or not feature-complete enough for production use.

However, if you happen to use Postgres, there is a way to run such queries through the database itself. You can even use Metabase's internal database for this purpose if you opted for Postgres as Metabase's storage backend.
The solution is to add the http extension to Postgres, which allows HTTP API calls to be made directly from SQL queries.
Installing it in your Metabase instance
This assumes you have Metabase deployed on your own server, either with DollarDeploy or manually, with a Postgres database enabled.
Install extension package
Log in to your VPS and run the following command to install the http extension package inside the Postgres container (the container name metabase-postgres-1 may differ in your setup):
docker exec -ti metabase-postgres-1 sh -c "apt update && apt install postgresql-17-http"
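You can then check that the extension is visible to Postgres; this is a quick sanity check you can run from any SQL client connected to the database:

```sql
-- Lists the http extension if the package was installed correctly
SELECT name, default_version
FROM pg_available_extensions
WHERE name = 'http';
```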
Add database connection to Metabase
The default public schema hosts Metabase's own tables, and we do not want to touch those. Use a separate schema instead, for example apidata.
Connection details
- Host: postgres
- Database name: metabase
- Username: metabase
- Password: metabase
- Schemas: only these: apidata
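If the apidata schema does not exist yet, create it once from any SQL client. The schema name here is just a suggestion; use whatever you configured above:

```sql
-- One-time setup: a dedicated schema for API data,
-- kept separate from Metabase's own tables in public
CREATE SCHEMA IF NOT EXISTS apidata;
```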

Enabling extension
To enable the extension, open a new SQL question in Metabase against this connection and run it once:
CREATE EXTENSION IF NOT EXISTS http;

You can discard your SQL query afterwards.
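If you want to confirm that the extension is active, you can query the catalog:

```sql
-- Shows the installed version of the http extension
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'http';
```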
Testing that everything works
Write the following query:
WITH http AS (
    SELECT * FROM http_get('https://ipinfo.io/json')
),
ipjson AS (
    SELECT result.*
    FROM http,
         json_to_record(http.content::json)
           AS result(ip text, hostname text, city text, country text)
)
SELECT * FROM ipjson
This will show your server's IP address information, like this:
| ip | hostname | city | country |
|---|---|---|---|
| 1.1.1.1 | static.1.1.1.1.clients.your-server.de | Helsinki | Finland |
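Note that http_get returns more than just the body: the http_response type it produces also carries the status code and content type, which is useful for debugging failing calls. A quick sketch:

```sql
-- Inspect the status code and content type alongside the body
SELECT status, content_type, content
FROM http_get('https://ipinfo.io/json');
```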
What to do next?
There are a lot of options available (see the pgsql-http extension documentation):
- Call public APIs
- Call your own API
- Create your API endpoints with N8N (we will explore this in the next article)
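For APIs that require authentication, the extension also provides a generic http() function that takes a full request record. A sketch based on the pgsql-http documentation; the URL and token below are placeholders:

```sql
-- Call an authenticated API with a bearer token (placeholder values)
SELECT status, content
FROM http((
          'GET',
          'https://example.com/api/data',
          ARRAY[http_header('Authorization', 'Bearer <your-token>')],
          NULL,
          NULL
      )::http_request);
```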
Tips and tricks
You can write a single query that both inserts and updates data in one go. This way the remote data gets refreshed into a local table, and you can also build complex queries on top of it:
Create a table for JSON snapshots
CREATE TABLE snapshots (
    id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
    data jsonb,
    date timestamp,
    type text NOT NULL,
    updated_at timestamp
)
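Since the queries below look rows up by type and date, an index on those columns keeps the deduplication checks fast as the table grows. This is my suggestion rather than part of the original setup:

```sql
-- Speeds up lookups by (type, date) used in the deduplication checks
CREATE UNIQUE INDEX snapshots_type_date_idx ON snapshots (type, date);
```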
We can write different chunks of daily updated data into this table and also visualize how the data changes over time.
INSERT INTO snapshots (type, date, data, updated_at)
SELECT
    'stripe',
    to_timestamp(api.content::json->>'timestamp', 'YYYY-MM-DD'),
    api.content::json,
    now()
FROM
    http_get('==API URL==') api
WHERE NOT EXISTS (
    SELECT 1
    FROM snapshots s
    WHERE
        s.type = 'stripe' AND
        s.date = to_timestamp(api.content::json->>'timestamp', 'YYYY-MM-DD')
)
After that, we can simply query the snapshots table:
SELECT (data->>'amount')::numeric / 100 FROM snapshots
WHERE type = 'stripe'
Combining UPDATE and INSERT in one query. The API response is fetched once in a shared CTE, so both branches see the same payload (calling http_get twice could return two different responses):
WITH api AS (
    SELECT * FROM http_get('==API URL==')
), updated AS (
    UPDATE snapshots
    SET
        data = api.content::json,
        updated_at = now()
    FROM api
    WHERE
        type = 'stripe' AND
        date = to_timestamp(api.content::json->>'timestamp', 'YYYY-MM-DD')
    RETURNING 'stripe' AS type, data
), added AS (
    INSERT INTO snapshots (type, date, data, updated_at)
    SELECT 'stripe', to_timestamp(api.content::json->>'timestamp', 'YYYY-MM-DD'), api.content::json, now()
    FROM api
    WHERE NOT EXISTS (
        SELECT 1
        FROM snapshots s
        WHERE s.type = 'stripe' AND s.date = to_timestamp(api.content::json->>'timestamp', 'YYYY-MM-DD')
    )
    RETURNING 'stripe' AS type, data
)
SELECT * FROM updated
UNION ALL
SELECT * FROM added
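As an alternative, if you add a unique constraint on (type, date), the same upsert can be expressed more compactly with INSERT ... ON CONFLICT. A sketch; the constraint is an assumption on top of the table definition above:

```sql
-- Assumes: ALTER TABLE snapshots ADD CONSTRAINT snapshots_type_date_key UNIQUE (type, date);
INSERT INTO snapshots (type, date, data, updated_at)
SELECT 'stripe', to_timestamp(api.content::json->>'timestamp', 'YYYY-MM-DD'), api.content::json, now()
FROM http_get('==API URL==') api
ON CONFLICT (type, date)
DO UPDATE SET data = EXCLUDED.data, updated_at = now();
```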
Sneak peek: getting the BTC price over the days:
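As a sketch of what such a query could look like, fetching the current price from a public API (the CoinGecko endpoint below is my example, not necessarily the API used for the chart above):

```sql
-- Fetch the current BTC price in USD from a public API
SELECT (content::json->'bitcoin'->>'usd')::numeric AS btc_usd
FROM http_get('https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd');
```

Inserting the result into the snapshots table once a day (type = 'btc', for example) then lets you chart the price over time with an ordinary SELECT ... ORDER BY date.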
