You may have already seen that, thanks to Kris Rice, we now have a baseline set of Oracle skills available for your favourite AI tools. Check out the skills repo here, and if you have any experience with the Oracle database, I encourage you to examine the details and offer any feedback, so that this becomes a global, community-sourced set of best practices for all things Oracle database.
Anyway, I thought I would give them a quick whirl over one of my plaything databases on my local machine to see how it all plays out.
First up, I asked codex to bring all the skills in for me
› import all the skills from https://github.com/oracle/skills so that I can use them here
- Installed annotations come from `$CODEX_HOME/skills`.
✔ You approved codex to run @' ... this time
• Ran @'
│ import json, urllib.request
│ repo='oracle/skills'; ref='main'
✔ You approved codex to run @' ... this time
│ import json, urllib.request
│ repo='oracle/skills'; ref='main'
│ … +9 lines
└ SKILL.md
TOTAL 1
────────────────────────────────────────────
• I confirmed oracle/skills has exactly one Codex-compatible skill (SKILL.md at repo root). I’ll install that as a single local skill named oracle-skills, which gives you the full knowledge pack from that repo.
✔ You approved codex to run python 'C:/Users/connor/.codex/skills/.system/skill-installer/scripts/install... this time
• Ran python 'C:/Users/connor/.codex/skills/.system/skill-installer/scripts/install-skill-from-github.py' --repo oracle/skills --path . --name oracle-skills
└ Installed oracle-skills to C:\Users\connor/.codex\skills\oracle-skills
────────────────────────────────────────────
- The repo contains one Codex-compatible skill (SKILL.md at repo root), not one SKILL.md per folder under skills/.
- Installed as local skill: C:\Users\connor/.codex\skills\oracle-skills
If you want, I can also generate separate local skills from each topic folder (admin, appdev, etc.) so they appear as individual skills.
I’ve got the SQLcl MCP server linked in as well, so now I can discover a list of databases that I can throw these skills at
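For reference, “linked in” just means an MCP server entry in the Codex configuration, something like this in config.toml (assuming a recent SQLcl release that includes the MCP server, with sql on your PATH; the exact command may differ on your platform):

```toml
[mcp_servers.sqlcl]
command = "sql"
args = ["-mcp"]
```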
› use sqlcl to list the connections i have available
• I’ll query SQLcl for saved connections now and then list them for you.
• Called sqlcl.list-connections({"model":"gpt-5"})
└ free26ai, ... db21 ...
─────────────────────────────────────────────────────────────────
• Available SQLcl connections:
free26ai, ... db21 ...
“db21” is a 21c database I use regularly, so it’s accumulated plenty of schemas and objects over the last few years, and because it’s a plaything, there are probably plenty of best practice issues lurking in there. So I’ll see what codex has to say about it
› Using the database connection db21 perform an analysis of the tablespaces, and provide recommendations for improvement based on the available information from the loaded oracle skills
• I’ll connect to db21, collect current tablespace metrics (size, free space, usage, autoextend, temp/undo), then map findings to recommendations from the installed Oracle skills guidance.
• Ran Get-ChildItem -Path 'C:/Users/connor/.codex/skills/oracle-skills/skills/admin' -File | Select-Object -ExpandProperty Name
└ backup-recovery.md
dataguard.md
… +2 lines
undo-management.md
user-management.md
• Called sqlcl.connect({"connection_name":"db21","model":"gpt-5"})
└ ### DATABASE CONNECTION ESTABLISHED ###
### DATABASE ENVIRONMENT CONTEXT ###
**1. Oracle Database Version:**
21.21.0.0.0
**3. NLS Parameters:**
The current Oracle database uses the following NLS configuration:
Database character set:{"results":[{"columns":[{"name":"PARAMETER","type":"VARCHAR2"},{"name":"VALUE","type":"VARCHAR2"}],"items":
[
{"parameter":"NLS_CHARACTERSET","value":"AL32UTF8"}
Full NLS parameters:
{"results":[{"columns":[{"name":"PARAMETER","type":"VARCHAR2"},{"name":"VALUE","type":"VARCHAR2"}],"items":
[
{"parameter":"NLS_RDBMS_VERSION","value":"21.0.0.0.0"}
,{"parameter":"NLS_NCHAR_CONV_EXCP","value":"FALSE"}
• Called sqlcl.run-sql({"model":"gpt-5","sql":"select name, open_mode, log_mode from v$database"})
└ "NAME","OPEN_MODE","LOG_MODE"
"DB21","READ WRITE","ARCHIVELOG"
• Ran Get-ChildItem -Path 'C:/Users/connor/.codex/skills/oracle-skills/skills' -Directory | Select-Object -ExpandProperty Name
└ admin
agent
… +13 lines
sql-dev
sqlcl
[snip]
You can see that it commenced with some basic database-level information, and then started digging deeper. I’ve omitted a lot of the output here because it does an extensive analysis, but there are a few things that are worth noting.
1) Useful sanity checks
Most DBAs have a list of “things we check for” scripts that they run on a regular basis, perhaps daily or weekly, as good housekeeping on the database. The skills here did a nice job of supplementing that. For example, it discovered that at some stage I’d left some segments floating around in the SYSTEM tablespace
• Called
└ sqlcl.run-sql({"model":"gpt-5","sql":"SELECT owner, segment_name, segment_type, ROUND(bytes/1024/1024,2) mb\nFROM dba_segments\nWHERE tablespace_name='SYSTEM'\n  AND owner='DEMO'\nORDER BY bytes DESC"})
"OWNER","SEGMENT_NAME","SEGMENT_TYPE","MB"
"DEMO","ASH$9","TABLE",5
"DEMO","ASH$8","TABLE",4
"DEMO","ASH$3","TABLE",2
"DEMO","ASH$7","TABLE",2
"DEMO","ASH$6","TABLE",2
"DEMO","ASH$4","TABLE",0.94
"DEMO","ASH$1","TABLE",0.94
"DEMO","ASH$2","TABLE",0.94
"DEMO","ASH$5","TABLE",0.56
"DEMO","SQL$3","TABLE",0.06
"DEMO","SQL$6","TABLE",0.06
"DEMO","SQL$5","TABLE",0.06
"DEMO","SQL$8","TABLE",0.06
"DEMO","SQL$1","TABLE",0.06
"DEMO","SQL$2","TABLE",0.06
"DEMO","SQL$9","TABLE",0.06
"DEMO","SQL$7","TABLE",0.06
"DEMO","SQL$4","TABLE",0.06
18 rows selected.
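If you want to fold a check like that into your own housekeeping scripts, a slightly more general version is to flag any segment in SYSTEM or SYSAUX owned by a non-Oracle-maintained account (the ORACLE_MAINTAINED column on DBA_USERS arrived in 12c):

```sql
select owner, segment_type,
       count(*)                      seg_count,
       round(sum(bytes)/1024/1024,1) mb
from   dba_segments
where  tablespace_name in ('SYSTEM','SYSAUX')
and    owner not in ( select username
                      from   dba_users
                      where  oracle_maintained = 'Y' )
group  by owner, segment_type
order  by mb desc;
```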
2) The occasional mishap
During its analysis, the following query bombed out
• Called
└ sqlcl.run-sql({"model":"gpt-5","sql":"SELECT warning_operator, warning_value, critical_operator, critical_value, observations\nFROM dba_thresholds\nWHERE object_type='TABLESPACE'\nFETCH FIRST 10 ROWS ONLY"})
Error starting at line : 1 in command -
SELECT /* LLM in use is gpt-5 */ warning_operator, warning_value, critical_operator, critical_value, observations
FROM dba_thresholds
WHERE object_type='TABLESPACE'
FETCH FIRST 10 ROWS ONLY
Error at Command Line : 1 Column : 102
Error report -
SQL Error: ORA-00904: "OBSERVATIONS": invalid identifier
https://docs.oracle.com/error-help/db/ora-00904/00904. 00000 - "%s: invalid identifier"
More Details :
https://docs.oracle.com/error-help/db/ora-00904/
which I assumed must have been due to an incorrect example query in our skills repo. However, after checking, there is (at the time of writing) no query against DBA_THRESHOLDS in the skills repo, so be aware that (unless you carefully build your prompts to mandate otherwise) our friendly AI is free to draw from other sources to craft the queries it runs against your database.
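For what it’s worth, DBA_THRESHOLDS is a real view; the AI simply invented a column (the closest genuine one is OBSERVATION_PERIOD). Restricting the query to columns that actually exist runs without complaint:

```sql
select object_name,
       warning_operator, warning_value,
       critical_operator, critical_value,
       observation_period
from   dba_thresholds
where  object_type = 'TABLESPACE'
fetch  first 10 rows only;
```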
The analysis
Finally, here is the analysis that got generated
• Connected to db21 and analyzed tablespace health.
No changes were made; this is an assessment + recommendations based on your loaded Oracle skills.
Current State
- USERS is the highest-used permanent tablespace: 66.12% used (21.16 GB used of metric size 32 GB), with 8.69 GB growth headroom to file max.
- ASKTOM is 31.54% used (10.09 GB), 21.35 GB headroom.
- UNDOTBS1 is very lightly used (0.09%), autoextend on, 18.98 GB headroom; undo_retention=900, undo_management=AUTO.
- TEMP currently has 0 active temp segments (v$tempseg_usage), and dba_temp_free_space shows fully free allocated temp.
- 30-day growth is low overall:
- TEMP +0.235 GB, UNDOTBS1 +0.193 GB, USERS +0.075 GB, others near flat.
- DEMO owns 18 objects in SYSTEM tablespace (mostly ASH$*, SQL$* tables).
- Tablespace alert thresholds are effectively not enforced (DO NOT CHECK entries; no meaningful alerts for USERS/ASKTOM).
Findings and Recommendations
1. Set proactive tablespace alerts (high priority).
Your skills recommend alerting before exhaustion (typically 75/85+). Currently this is missing/disabled.
- Recommendation: configure DBMS_SERVER_ALERT thresholds for key tablespaces (USERS, ASKTOM, UNDOTBS1, TEMP).
2. Move user objects out of SYSTEM (high priority).
Skills explicitly call out “never put user objects in SYSTEM/SYSAUX.” You currently have DEMO objects there.
- Recommendation: create a dedicated application tablespace and move those objects.
3. Increase very small autoextend increments (medium priority).
Some files use tiny NEXT increments (ASKTOM ~0.01 MB, USERS 1.25 MB), which can cause excessive extension churn.
- Recommendation: set practical increments (for example 128MB–1GB depending on workload).
4. Keep undo as-is, but monitor ORA-01555 / ORA-30036 (medium).
Current undo metrics are healthy (ssolderrcnt=0, nospaceerrcnt=0).
- Recommendation: keep UNDO_RETENTION=900 unless workload changes; revisit only if long-query consistency issues appear.
5. Refine layout for large objects in USERS (medium).
USERS contains major table+index footprint; skills recommend separating data/index/LOB for manageability and IO isolation.
- Recommendation: plan gradual move to dedicated *_DATA, *_IDX, *_LOB tablespaces for heavy schemas.
6. Validate temp with peak history, not instantaneous snapshots (low/ongoing).
Current temp usage is fine, but skills recommend sizing for peak from history.
- Recommendation: periodically review AWR temp peaks and adjust maxsize before peak events.
Skills used for recommendations:
- tablespace-design.md
- space-management.md
- undo-management.md
If you want, I can execute a safe remediation runbook next:
1. create recommended thresholds,
2. generate DDL to move DEMO objects from SYSTEM,
3. generate ALTER DATABASE DATAFILE statements for better autoextend increments.
In terms of an action plan, I’m pretty impressed. Taking each in turn:
- Tablespace alerts – good advice. Probably not applicable for my “toy” database, but definitely sensible for any serious database (a sketch of the mechanics is after this list).
- Objects in SYSTEM – a great detection. I took a look at these tables and I created them over 2 years ago :-), so having an assistant picking up on stuff like this is helpful.
- Autoextend size – 100% agree on this one. I’ve lost track of the number of times I’ve seen customers getting slow DataPump import times when the slowness is caused by file extension in tiny increments. In my experience, anything above 100M as a size chunk is fine, but with storage getting faster and faster, even this might need a boost in future (an example is below as well).
- Undo – fine with this. I rarely run long queries, so no real changes here.
- Tablespace separation – I think this is perhaps no longer relevant in the modern database world. I’ve yet to see a decent argument for data/index separation. Perhaps LOB separation is beneficial, but for me, a single tablespace per schema is normally just fine.
- TEMP – same comment as undo; there’s just not enough stress on the database to get a good measure.
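For anyone who wants to act on the alerts recommendation, the mechanics are a single DBMS_SERVER_ALERT call per tablespace. This is only a sketch – the 75/85 percentages and the USERS tablespace are placeholder choices, not a prescription:

```sql
begin
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '75',    -- warn at 75% full
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '85',    -- critical at 85% full
    observation_period      => 1,       -- minutes between evaluations
    consecutive_occurrences => 1,
    instance_name           => null,    -- tablespace metrics are database-wide
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => 'USERS');
end;
/
```

Similarly, the autoextend fix is a one-liner per datafile. The file name here is obviously hypothetical, and pick an increment that suits your storage:

```sql
alter database datafile '/u01/oradata/DB21/users01.dbf'
  autoextend on next 128m maxsize 32g;
```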
I didn’t end up taking the recommendation to move the objects out of SYSTEM – I just dropped them 🙂
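For completeness, had I wanted to keep them, one option is to generate the move DDL straight from the dictionary. The DEMO_DATA tablespace name here is hypothetical, and bear in mind that moving a table leaves its indexes unusable until they are rebuilt:

```sql
select 'alter table '||owner||'.'||segment_name||
       ' move tablespace demo_data;' as fixup
from   dba_segments
where  tablespace_name = 'SYSTEM'
and    owner = 'DEMO'
and    segment_type = 'TABLE';
```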
But give our Oracle skills a try on your own databases. (As always, start with a non-production database and use a read-only database account.)
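And a bare-bones account for this kind of dictionary-only poking around needs very little. A sketch, with a user name of my own invention:

```sql
create user skills_ro identified by "pick-a-long-random-password";
grant create session to skills_ro;
grant select any dictionary to skills_ro;  -- dictionary views only, no application data
```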


