Dataset Viewer
Auto-converted to Parquet

Columns:
  instance_id         string  (length 11-37)
  selected_database   string  (22 classes)
  query               string  (length 36-847)
  normal_query        string  (length 41-892)
  preprocess_sql      list    (0-2 items)
  clean_up_sqls       list    (0-2 items)
  sol_sql             list    (empty)
  external_knowledge  list    (empty)
  test_cases          list    (empty)
  category            string  (2 classes)
  high_level          bool    (2 classes)
  conditions          dict
solar_panel_1
solar_panel
How likely is the 'solar plant west davidport' (matching the name regardless of case) to be down when we need it? Give me its system unavailability score, just the number, to four decimal points.
For the solar plant labeled 'solar plant west davidport' (case-insensitive match), calculate its system unavailability. Display the result as a scalar value, rounded to 4 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 4, "distinct": false, "order": false }
solar_panel_2
solar_panel
I need to know the financial hit from plants with recurring warranty issues: the ones whose warranty status is 'claimed' and have had three or more claims logged against them. Can you figure out the total lifetime revenue loss for them, but only count ones where we know their go-live date and degradation? Just assume they all have 15 years left, produce 500,000 kWh a year, and we sell the power at 12 cents. Give me the grand total.
Calculate the total projected lifetime revenue loss for all plants that are flagged for Warranty Claim Risk. For this calculation, only include plants where the commissioning date and cumulative degradation are known. For the projection, assume a remaining lifetime of 15 years, an average annual energy production of 500,000 kWh, and an energy price of $0.12/kWh. Present the total loss as a single value.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_3
solar_panel
If we could magically cool the panels for snapshot pv945724 down to 25 degrees Celsius, what would its power output be? Give me the temperature-corrected performance in watts, with two decimal points.
For the snapshot 'pv945724', calculate the temperature-corrected performance. Use a reference temperature of 25°C. Display the result in watts, rounded to two decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
solar_panel_4
solar_panel
For the maintenance event pv937101, did the repair cost more than the revenue we lost during the downtime? To figure that out, you'll have to clean up the revenue loss text by stripping out any '$' or ',' characters. Tell me the maintenance cost to revenue impact ratio, just the number, rounded to two decimals.
What is the maintenance cost to revenue impact ratio for the snapshot 'pv937101'? The calculation requires cleaning the revenue loss text by removing dollar signs and commas to convert it to a numeric value. Calculate it and return a single numeric value rounded to 2 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
solar_panel_5
solar_panel
How many of our plants are real lemons, both losing more than a quarter of their potential power and being offline for more than one day out of every twenty? Make sure you only use records that have all the numbers needed for the math. Just give me the total count.
What is the total count of plants that are classified as both an underperforming asset, meaning its performance ratio is less than three-quarters, and a chronic downtime asset, meaning its availability is below nineteen-twentieths? Only include snapshots where all data necessary for the calculations is available and valid. Return a single integer.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_6
solar_panel
Using the latest data for each plant, find the one that costs the most to run for its size, and tell me how much power it loses internally. I need the system power loss ratio for whichever plant has the biggest operational expenditure index. Give me the number to 4 decimal places, and only consider plants and snapshots with all the necessary and valid data to make the calculation crash-proof.
For the plant with the highest operational expenditure index based on its most recent snapshot, what is its system power loss ratio, presented to 4 decimal places? Only plants with a known, non-zero power capacity and snapshots with known power values should be considered, and the logic must prevent division-by-zero errors.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 4, "distinct": false, "order": true }
solar_panel_7
solar_panel
When our panel busbars are as corroded as they can get, how much does the quality drop? Calculate the average fill factor degradation for all panels in the worst category for corrosion (regardless of case), but only use data where we have both a before and after fill factor. Give me the result to 3 decimal places.
What is the average fill factor degradation for panels where the busbar corrosion has reached the highest level of severity (case-insensitive)? Only include snapshots where both initial and current fill factors are known. Display the result to 3 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": false }
solar_panel_8
solar_panel
When a plant with hjt panels breaks, what's the average cost to fix it? Calculate the mean repair cost for those plants (matching 'hjt' regardless of case), assuming they've been running for two years straight and have a valid, positive mtbf record. Give me the final number, rounded to a whole dollar.
Determine the mean repair cost for plants using the 'hjt' panel type (case-insensitive), assuming a total operational time of 2 years (17520 hours). Only include snapshots with a known and positive mtbf for the calculation. Provide the result rounded to the nearest dollar.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 0, "distinct": false, "order": false }
solar_panel_9
solar_panel
When our electrical systems fail, how much money do we lose? Add up all the revenue loss from every incident with an 'electrical integrity failure', making sure to strip the dollar signs and commas from the text to get the total.
What is the total revenue loss for snapshots where there is an electrical integrity failure? To perform the sum, the revenue loss text must be cleaned by removing dollar signs and commas. Sum up the cleaned revenue loss for these records.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_10
solar_panel
After accounting for all the internal power drains, what's the actual juice each plant is sending to the grid right now? Only using snapshots where we know both the power loss and current output, and their combined total isn't zero, give me a list of plant names and their latest effective power output, rounded to two decimal places, with the most powerful plant at the top.
For each site, calculate the effective power output using the most recent snapshot. Only include snapshots where both power loss and current power output are known, and their sum is not zero to prevent calculation errors. Display the site label and the calculated power in a table, sorted by the effective power in descending order. Show the result to 2 decimal places.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
solar_panel_11
solar_panel
For the plants that are aging terribly, meaning their performance drops by more than 0.5% a year, how long does it typically take to fix them? I need the average mean-time-to-repair for these 'accelerated aging assets'. The age calculation needs to be safe for new plants. Give me the answer in hours, rounded to two decimal places.
Find the average mean time to repair for all plants classified as accelerated aging assets, defined as those with an Annual Degradation Rate greater than 0.5%. The calculation for the degradation rate must handle cases where the plant's age is zero. Round to 2 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
solar_panel_12
solar_panel
How many times have our panels gotten so dirty that they're losing more than three-twentieths of their potential energy? Just give me the total count.
Count the number of snapshots where the power loss from soiling means that for every 200 watts of potential power, more than 30 watts are lost. Return a single integer value.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_13
solar_panel
Which of our plants are a recurring headache for warranty claims, with more than just a couple of filings? I need a list of sites whose status is 'claimed' (regardless of case). Show their names and how many claims they've had, from most to least.
List all plants where the number of warranty claims exceeds the typical initial one or two filings, and their warranty status is 'claimed' (case-insensitive). Show the site label and the number of warranty claims. Sort by the number of claims in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
solar_panel_14
solar_panel
Among our plants in the toughest, highest-risk locations, what's the worst we've seen dirt and grime impact performance? I need the highest soiling loss index from any site that's in that top risk category. Give me the percentage.
What is the highest soiling loss index recorded for a plant that is located in one of our designated top-tier environmental risk zones (case-insensitive)? Return the value as a percentage.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_15
solar_panel
Let's get a financial forecast for our worst panels, the ones that degrade so fast they'll lose over 14% of their power in 20 years. What's the total projected revenue loss over their remaining 15-year lifespan? Base the calculation on a standard 400,000 MWh annual output and a sale price of $50 per MWh.
What is the total lifetime revenue loss projection for all plants using panel models that are projected to lose more than 14% of their output over a 20-year lifespan? Assume an average annual energy production of 400,000 MWh, an energy price of $50/MWh, and a remaining lifetime of 15 years for all plants.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_16
solar_panel
How much are the different types of panels losing their voltage punch over time? I need you to group by the panel technology, making sure to ignore case, and then figure out the average voltage degradation factor for each. But hey, only use data where we actually have a valid 'before' and 'after' voltage to compare, and make sure the starting voltage isn't zero. List the panel types and their average voltage loss, with the worst ones first.
For each distinct panel model type, calculate the average voltage degradation factor. This calculation should only use snapshots that contain all the necessary voltage data and where the initial voltage reading is a positive number. The panel type should be converted to lowercase before grouping. Display the panel kind and the average degradation factor, sorted by the factor in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": true, "order": true }
solar_panel_17
solar_panel
For the machines that are down more than one day in a 20-day period, what's the average price tag on a single repair? To calculate the mean repair cost, you'll need to figure out how long each machine has been running. Only use data where the mtbf and service time are positive.
What is the average mean repair cost for assets that are offline more than 5% of the time? The calculation requires the total time in service, which must be derived from the snapshot and go-live dates, and only include snapshots where mtbf and total hours are positive.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_18
solar_panel
How many of our plants have a major electrical issue right now? I'm talking about situations where the grounding is shot or the bypass diodes are not running in their normal state. Just give me a count of the unique plants with these problems, and don't worry about the case of the status text.
Count the number of distinct plants where the electrical integrity is compromised, indicated by either a complete failure of the grounding system or a bypass diode status that is anything other than nominal (checks performed case-insensitively).
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": true, "order": false }
solar_panel_19
solar_panel
After accounting for all the power being lost inside the system, what was the actual usable power output for snapshot 'pv945724'? Give me the final number in watts.
What is the effective power output for snapshot 'pv945724'? Calculate it and return the value in watts.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_20
solar_panel
For the panels specifically made by longi (regardless of case), how much has their current output dropped on average? To get a good average, please only use records where you have a valid, positive starting current to compare against. Calculate the mean current degradation factor across all of them.
What is the average current degradation factor for all panel models from the manufacturer 'longi' (case-insensitive)? For an accurate average, include only snapshots that have a valid, positive initial current reading to compare against the current reading.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_1
solar_panel
Let's make a special table for problems that need immediate attention, call it `high_risk_alerts`. It needs to store the snapshot id, the alert status, both maintenance and replacement priorities, and when it happened. After creating it, fill it with any alert that's so serious we'd need to send our top people out or order a new part right away. Make sure to find these alerts regardless of case. Also, make sure the snapshot id links back to the main plant record table.
Create a new table `high_risk_alerts` with columns for the snapshot key, alert state, maintenance priority, replacement priority, and the timestamp of the snapshot. Then, populate it by inserting records for any issue that would require either dispatching a senior engineer or ordering a replacement part before the end of the day (checks must be case-insensitive). Add a foreign key constraint on the snapshot key referencing `plant_record`.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_2
solar_panel
I need a handy summary of how our plants are doing right now. Can you create a view called `v_plant_performance_overview`? It should show the plant's name, when the data was taken, how much power it was making, how much sunlight was hitting it, and the cell temperature. Make sure it only shows the very latest data we have for each plant.
Create a view named `v_plant_performance_overview`. This view should join data from the `plants`, `electrical_performance`, and `environmental_conditions` tables. It must display the site label, snapshot timestamp, power output, plane-of-array irradiance, and cell temperature for the most recent snapshot of each plant.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_3
solar_panel
I need a faster way to see yearly energy production. Create a materialized view called `mv_yearly_plant_yield`. It should calculate the total kilowatt-hours produced by each plant for each year and store it, but only use records that actually have a yield value. The view should have the plant's name, the year, and the total yield.
Create a materialized view named `mv_yearly_plant_yield` which summarizes the total energy yield for each plant for each year. It should include the site label, the year, and the total energy yield in kwh, only including records where the energy yield is not null.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_4
solar_panel
Let's build a cleaning schedule table. Call it `panel_cleaning_schedule`. It needs a unique ID for each entry, the plant's ID, the date it was last cleaned, and the date it's due next. Then, fill it up for all our plants using the latest cleaning info from their mechanical health reports to calculate the next due date.
Create a new table `panel_cleaning_schedule` with columns `schedule_id` (Primary Key, Serial), `site_key` (Foreign Key to plants), `last_cleaned_date` (Date), and `next_cleaning_due` (Date). Populate it for all plants, setting `last_cleaned_date` to the most recent `last_clean_date` from `mechanical_condition` and `next_cleaning_due` by adding the `cleaning_cycle_days` to that date.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_5
solar_panel
I want a tool to quickly tell me how old a plant is. Can you create a function called `get_plant_age`? You give it a plant's ID, and it should spit out its current age in years.
Create a function `get_plant_age` that takes a site key as input and returns the age of the plant in years (as a real number) based on its go-live date and the current date.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_6
solar_panel
I want a 'hall of fame' for extreme weather events at our plants. Can you make a view called `v_environmental_extremes`? It should find the highest ambient temperature, strongest wind speed, and most intense UV index ever recorded across all sites. For each of these records, show which plant it happened at, what the record-breaking value was, and when it happened.
Create a view `v_environmental_extremes` which, for each environmental variable, shows the plant site label, the value, and the timestamp for the all-time maximum recorded value. Include ambient temperature, wind speed, and UV index.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_7
solar_panel
Let's make a log of all our plants that aren't up to code. Create a table called `compliance_issues` with an id, the plant's id, a space for a description, and the date it was logged. After you create it, go through the main plants list and add an entry for every single one that's failed its compliance checks (ignoring case). You can just put 'Initial non-compliance record' for the description.
Create a new table `compliance_issues` with columns for `issue_id`, `plant_sitekey`, `issue_description`, and `date_logged`. Then, insert a record for every plant that has failed to meet its regulatory standards, based on a case-insensitive check of its compliance flag, using the specific description 'Initial non-compliance record'.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_8
solar_panel
I need a new place to keep track of our plant's health stats. Can you create a table called `plant_kpi_summary`? It should have columns for the site's id, its age in years, its annual performance drop, and its uptime percentage.
Create a new table named `plant_kpi_summary` to store key performance indicators. The table should include a key for the site (text, primary key), the plant's age in years (real), its annual degradation rate (real), and its system availability (real).
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_9
solar_panel
Let's make a quick-look list of the absolute worst problems. Create a view, call it `v_critical_alerts_details`, for every alert that's got the highest possible priority for both a maintenance dispatch and a part replacement. Make sure you find them regardless of case. Show me the plant name, when it happened, and the event count.
Create a view named `v_critical_alerts_details` that lists the site label, the snapshot timestamp, and the alert count for all snapshots where the issue is so severe it has been assigned the maximum priority level for both maintenance and replacement (checks performed case-insensitively).
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
solar_panel_M_10
solar_panel
I want to start logging all our repair jobs. Can you set up a new table for me called `maintenance_log`? It needs a unique id for each entry, a reference to the snapshot it's related to, the date of the repair, a description of what was done, and how much it cost. Make sure the snapshot reference actually links to a real record.
Create a new table `maintenance_log` with columns `log_id` (serial primary key), `snap_reference` (text), `log_date` (date), `action_taken` (text), and `cost` (numeric(10, 2)). Add a foreign key on `snap_reference` to the `plant_record` table.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
hulushows_1
hulushows
Let's check which shows have tons of content across different releases but no written description. Add up their standard content (episodes, clips, etc.) across all tiers, keep only the ones with over 500 total, and no annotations. Show each show's ID, name, and total volume, sorted by volume, highest first.
I want to identify all Incomplete High-Engagement Titles. Compute the total content volume for each title by summing up standard content quantities across all distribution records. Then check whether the title has any descriptive annotation. Can you only include titles with a high total volume (greater than 500) and no annotations? List each title's ID, name, and total content volume, sorted by volume in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_2
hulushows
I want to find shows that show up in three or more different subscription tiers. For each show, can you count how many unique tiers it's available in? First, keep the ones that are in at least three tiers, and then sort the results from the most widely distributed to the least.
I want to know all Multitier Syndicated Shows. For each show with at least three tiers, show its unique identifier and the number of tiers it appears in. Sort the results by tier count in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_3
hulushows
Let's find out which titles are getting strong user scores even though they don't have any trailers or clips. I want to look across all content and find the highest user rating among those that don't offer any visual previews but still include a valid score. Just return that one number, rounded to 2 decimals; it tells us how well these visually sparse titles are performing.
My goal is to identify the Highly Rated but Visually Empty titles in the catalog. Specifically, I want to calculate the highest user rating among all titles that have no available trailers or clips but still include valid user score data. Give me the maximum user score across these titles, rounded to 2 decimals.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
hulushows_4
hulushows
I want to find out how long it's been since each show got any new updates. For each show, check the most recent update date. But if there's no update info, just use the launch date instead. Then, I'd like to see how many days it's been since that date, and treat that as the staleness score. If a show is available in multiple tiers, take the smallest one. Can you show the show ID and the number of days it's been stale? Finally, sort the list so the stalest shows, the ones that haven't been updated in the longest time, come first.
For each show, I need to measure the Temporal Staleness Index (TSI). Please determine how many days have passed since the show last had any updates. If no update timestamp is available, use the launch date as a fallback. I'd like to see the show ID along with its staleness index, and the minimum value of this index across all its distribution tiers. Sort the results so that the shows with the highest staleness appear first.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_5
hulushows
How many titles are spread across over six nested genre tags and lean more on short clips, including both general clips and film-related clips, than full-length features?
Count how many shows meet the Over-Fragmented Offering classification in the catalog.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
hulushows_6
hulushows
Let's find groups of shows that belong to the same franchise. Can you only include franchises that have at least two shows? For each group, can you show me the franchise ID, how many shows it has, and list the show titles? Also, I need to sort the list so that the biggest franchises with the most shows come first.
Please find all franchise groups. For each group with at least two shows, list the franchise ID, total show count, and the list of show titles. Sort the results by show count in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_7
hulushows
I want to find out how many episodes there are on average in each season for every show. Can you look at shows where we know both the total number of episodes and how many seasons they have? For each one, give me the show ID, how many episodes it has, how many seasons, and the average episodes per season. Please skip anything where the season count is missing or zero. Finally, show the ones with the highest average first.
Please calculate the average number of episodes per season for each show. Can you only include shows with both episode and season counts? For each, list the show ID, total episodes, total seasons, and the episode-to-season ratio. Importantly, exclude entries with missing or zero seasons. Sort results by the ratio in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": true }
hulushows_8
hulushows
Let's figure out what the most frequent top-end maturity rating is across all the shows. Basically, I want to scan all the records, grab the maturity info, and tell me which of those high-end ratings pops up the most. Just return the one that shows up the most often.
To support catalog analysis, compute the Most Common Peak TV Rating across All Distribution Records. It should consider all available distribution data, extract their rating information, and determine the single most frequently assigned rating value. Give me a single text result representing the most common rating.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_9
hulushows
Which franchises are producing the most content? Group shows in the same franchise and add up their episodes. Some episode counts may be text or invalid; after trimming whitespace, parse only digit strings (digits 0-9 only) and treat the rest as zero. Show only franchises with more than 100 total episodes, listing the identifier, number of shows, and total episodes from largest to smallest.
Generate a Franchise Engagement Summary by grouping shows that belong to the same franchise. The episode count field may be stored as text and can include non-numeric values; after trimming whitespace, parse only digit strings (digits 0-9 only) and treat everything else as zero. Only include franchises whose total number of episodes exceeds 100. For each franchise, provide its identifier, the number of shows it contains, and the combined episode count, sorted by total episodes in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_10
hulushows
Let's see how our shows are spread out across the different subscription plans. For each plan, I want to know how many titles it has and what chunk of the full catalog that is. Just give me the plan name, the total count of media in it, and what percentage of the catalog that represents. Start with the plans that have the biggest share of content.
Determine the Tier Distribution Ratio to understand how media content is shared across different access levels. First, sum up the total media volume available under each tier. Then compute the overall media total across all tiers. For each tier, calculate its share of the total by dividing the tier's media volume by the grand total. List the tier ID, tier type, media total, and its Tier Distribution Ratio. Sort the results by the ratio in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 4, "distinct": false, "order": true }
hulushows_11
hulushows
Let's see which franchises are really making waves across different subscription levels. We're looking for those that have at least 3 shows, and those shows appear across 3 or more tiers. For each of these franchise powerhouses, show me the franchise ID, how many shows they've got, and how many tiers they show up in. Sort the list by number of shows to spotlight the most widely spread ones first.
To evaluate Syndicated Franchise Engagement, we need to check which franchise groups have both a strong show count and wide distribution. For each franchise, count how many shows belong to it and how many unique distribution tiers those shows appear in. Only include franchises with at least 3 shows and a presence in 3 or more tiers. List the franchise ID, number of shows, and number of tiers, ordered by show count in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_12
hulushows
Let's dive into the main genre types that keep popping up in our show catalog. I'm only interested in shows labeled as Drama, Comedy, or Animation and Cartoons. For each of those, can you pull together a quick list that includes the show's ID, its title, and what genre it's tagged under? Sort the list by title.
We want to analyze Primary Genre Classification across our show catalog. For this, filter and retrieve all titles that fall under the Drama, Comedy, or Animation and Cartoons categories. For each matching title, show its unique ID, name, and its primary genre type. Sort the results alphabetically by title.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_13
hulushows
I want to look at how packed each show's video library is. Can you pull up a list that shows the total number of video items for each show and group them into three levels? Label them High if they've got over 500 videos, Medium if they're between 200 and 500, and Low if they're under 200. Let's sort the list so the shows with the most content show up first, and include the show ID, total count, and the volume level tag.
For each show, compute its total number of video items and classify it using the Content Volume Level Classification. Return the show ID, total volume, and the resulting volume category, ordered by total volume from highest to lowest.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_14
hulushows
Which show feels the most crammed with promotional stuff? Just give me the one with the heaviest promo presence overall.
Find the Maximum Promo Saturation Ratio across all shows in the catalog.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
hulushows_15
hulushows
How many shows land in our usual user-score buckets: Low, Medium, or High? Just give me the total.
Report the total number of shows whose user scores fall into the standard Low, Medium, or High buckets.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
hulushows_16
hulushows
I want to find shows that show up in three or more different subscription tiers. For each show, can you count how many unique tiers it's available in? First, keep the ones that are in at least three tiers, and then sort the results from the most widely distributed to the least.
I want to know all Multitier Syndicated Shows. For each show with at least three tiers, show its unique identifier and the number of tiers it appears in. Sort the results by tier count in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_17
hulushows
Let's grab the shows where the bigger of their trailer or feature count is over 100. Show the ID, title, and that number, sorted from highest to lowest.
Find shows whose Peak Media Load is greater than 100. Give me the show ID, title, and the peak value, sorted from highest to lowest.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_18
hulushows
I want to see how shows rank based on what viewers think. Just group them by how well they're rated, ignore anything without a proper score, and tell me the show ID, name, how it scored, and which group it ended up in; start from the highest-rated and go down.
Analyze show-level user ratings to assign each show to its corresponding Episode Rating Band. Only include shows with valid numeric scores. For each show, return its ID, title, user score, and band, sorted from highest to lowest score.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 8, "distinct": false, "order": true }
hulushows_19
hulushows
Which shows actually have film clips? List the ones with the most film-related clips first. For each show, show the title, how many film clips it has, and a quick flag for Has Clips or No Clips.
I want to check film-clip availability for each show. For every show, return its ID, title, the number of film-related clips, and a flag saying Has Clips if that count is greater than 0, otherwise No Clips. Sort from highest to lowest film-clip count.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_20
hulushows
Let's see which shows are loading up on promo messages. For each one, count availability updates, promo messages, alerts, and expiration notices across the free and member tiers. Only include shows with at least one note, and list them starting with the most.
Show the Promotional Intensity Summary for each show with at least one note. Include the show ID and the total count, sorted descending.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_1
hulushows
Let's drop in a new show using these exact values: make the ID 900001, set the official name to new-show-canonical, call it New Show Title, link it to series 99999999, tag it to studio 8, and add the note 'This is a newly added show for fall season release.' For genres, store a JSON with score 4.25, type show, main genre Science Fiction, and breakdown Science Fiction~Space|Adventure. Once that's saved, return what you added.
Add a brand-new show with these exact details: ID 900001, official name new-show-canonical, title New Show Title, series 99999999, studio 8, and the note This is a newly added show for fall season release. For its genre info, save a JSON that has a score 4.25, type show, main genre Science Fiction, and a breakdown Science Fiction~Space|Adventure. After saving, show me the inserted record.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
hulushows_M_2
hulushows
So, which studios are really cranking out the content? Let's create a function called calculate_studio_activity_index that tells us how many entries a studio has in the system. Just pass in the studio's ID, and it'll return the total number of catalog records linked to that studio, even if some titles repeat. Simple enough, right? Oh, and while we're at it: find the show with ID 54 and update its official name to 'updated-family-guy'.
Create a PostgreSQL function called calculate_studio_activity_index that computes the Studio Activity Index and returns the calculated value. The function takes one parameter: the unique identifier of a studio. It calculates the total number of content records that are associated with the given studio in the catalog, counting all entries regardless of whether the titles repeat. The result is an integer representing the count of all such records. Additionally, update the canonical name of a specific show in the catalog. Locate the show using its unique content key, which is 54, and set its canonical name to 'updated-family-guy'.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
hulushows_M_3
hulushows
Let's check how much content each subscription gets. Just give me the plan name, like "free" or "subscriber", and I'll count all the shows linked to it. Don't worry about casing or spaces; it should match even if someone types it differently.
Create a function that returns the number of unique shows available under a given subscription plan like "free" or "subscriber". Match the plan name in a case-insensitive and trimmed way to ensure accurate mapping. Return the total number of linked shows.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_4
hulushows
Let's check how many titles belong to a given series. Just pass in a series ID, and we'll return the total number of titles linked to that series.
We need to calculate the number of distinct titles that belong to a specific series to support the Series Entry Count metric. Given a series identifier as input, the system should return a single integer representing how many entries are part of that series.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_5
hulushows
Let's see how our shows break down by age-appropriateness, like "TV-Y", "TV-PG", etc. Just group them and count how many land in each level, making sure different casing or extra spaces are treated the same.
Could you help me get a quick overview of how shows are distributed across different TV Rating types? For each rating, return how many shows fall under it, normalizing the rating values by lowercasing and trimming to avoid mismatches.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_6
hulushows
Can you tell me whether all shows in a series share the same name? Just use check_series_title_uniformity with the series ID; it returns true if the titles match across the board, false if they don't.
A function named check_series_title_uniformity is required. This function determines the Series Title Uniformity Flag for a given series. It checks whether all shows linked to the same series share an identical canonical title. The output is a boolean value: true if all titles match, and false otherwise.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_7
hulushows
Let's figure out which studios have been the busiest. For each one, can you show me how many titles they've worked on? Just include the studios that are actually linked to content, and sort the list so the most active ones show up first. I need this saved as a permanent table called studio_catalog_size.
We need to create a persistent table of all Studio Catalog Size data for our content analysis. Please set up a table called studio_catalog_size that includes each studio's unique identifier and the total number of titles linked to that studio. The count should be grouped by studio and sorted from the most prolific to the least. Please note: only include entries that are explicitly associated with a studio.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_8
hulushows
Let's figure out which studios have been the busiest in the catalog and save it in a table called title_count_per_studio. For each one, can you show me their ID, name, and how many shows they've worked on? Only count the ones that are actually linked to a studio. We'll need to pull the studio info by joining the show records with the studio list. Then, sort the results so the studios with the most titles show up first.
Let's build a persistent table called title_count_per_studio to analyze Title Count per Studio for catalog assessment. This table should include each studio's unique ID, its canonical name, and the number of titles linked to it. Only include entries where a valid studio association exists. The result must be grouped by studio and sorted so the most prolific studios appear first. A join is required between the show catalog and the studio registry. The output will be a structured table listing studio ID, studio name, and how many titles are attributed to each.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_9
hulushows
Set up a permanent table called avg_title_length_per_studio so we can track how long each studio's show titles usually are. It should include which studio it is and the average number of characters in the titles of its shows. We're only defining the structure for avg_title_length_per_studio right now; no data yet.
Please create a permanent table named avg_title_length_per_studio to track the average length of show titles per production studio. The table must have two columns: (1) the studio's unique ID and (2) the average number of characters in titles of shows linked to that studio. This step only defines the schema for avg_title_length_per_studio; do not insert any data.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
hulushows_M_10
hulushows
Let's check how busy our release schedule was in a particular year. I need a function that takes in a year and tells me how many shows were launched during that time. It should go through the catalog and count only the shows whose launch dates fall in that year, but only for test titles with srkeys 900001 and 900002. Please don't include the rest of the system's data. The result should just be a number showing how many of those selected titles came out in that year.
Create a function named get_launch_count_by_year that computes the Launch Year Distribution for a specific year. This function analyzes the release history by counting how many titles were launched in the specified year. It operates over the catalog of shows, using each show's recorded launch timestamp, and filters to only include test data with srkeys in (900001, 900002). The output is a single integer indicating the number of titles launched in that year.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_1
cybermarket_pattern
Give me all platforms sorted by their risk scores, most dangerous on top, and show 4 decimal places.
List each marketplace with its Marketplace Risk Score (MRS), rounded to 4 decimal places, highest first.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 4, "distinct": false, "order": true }
cybermarket_pattern_M_1
cybermarket_pattern
Mark every seller who's currently being investigated or getting a lot of attention from authorities as "High" on the compliance scale, leave the already-High ones alone, and give me the IDs that changed.
Set the compliance category to "High" for all sellers with an active investigation or high attention from authorities, skipping those already at "High". Return the IDs of the sellers that were updated.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_M_2
cybermarket_pattern
Add a daily review entry for each sale the model rates over 70% fraud risk and doesn't already have one.
Create a daily review entry for every transaction with model-assessed fraud probability above 70% that currently has no review entry.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_M_3
cybermarket_pattern
Purge the top-priority alert cases that are resolved and whose next review date is over 180 days old.
Delete alert cases at the highest escalation level that are resolved and have a next review date more than 180 days ago.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_M_4
cybermarket_pattern
Save the current list of sites that meet the security rule, along with their computed rating, into a fresh archiveβ€”replace any prior archive.
Archive the current list of Secure Platforms together with their Marketplace Risk Score, replacing any existing archive if present.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_2
cybermarket_pattern
Split shoppers into three risk-per-dollar groups; for each group, show how many shoppers there are, what fraction of their orders go across countries, and how their sessions break down across high, medium, and low anonymity.
Group buyers into three buckets based on Buyer Risk Dollar Ratio; for each bucket, return the buyer count, the share of their transactions that are cross-border, and the distribution of session anonymity (High/Medium/Low).
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
cybermarket_pattern_3
cybermarket_pattern
Give me a list of sellers with their transaction flow scores, plus details about how complicated their shipping networks are.
List vendors along with their Platform Liquidity Rate (PLR), including metrics related to Shipping Route Complexity.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
cybermarket_pattern_4
cybermarket_pattern
Show me how fast each session processed threats, along with the levels of login verification for buyers.
Provide Threat Handling Rate (THR) for each security session, ordered from highest to lowest. Additionally, include metrics related to Buyer Authentication Levels.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
cybermarket_pattern_5
cybermarket_pattern
I want to know the keyword-hitting values for all customer and internal chats to identify high-risk patterns. Round to 3 decimal places and show in descending order
Calculate Suspicion Signal Density (SSD) for every communication thread, rounded to 3 decimal places and shown in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
cybermarket_pattern_M_5
cybermarket_pattern
Update table statistics and query plans for the vendors table, focusing on improving efficiency-related query performance.
Analyze the vendors table to refresh statistics for Compliance Efficiency Index (CEI) queries.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_6
cybermarket_pattern
Show me all protected platforms, whether they're up or down, how many serious escalation cases they have, and how bad their current alerts are.
List all Secure Platforms and their current operational status. Also include metrics related to Tier-3 Escalation Case and Alert Severity Levels.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_7
cybermarket_pattern
Tell me how many live listings we have in each category, along with which ones have weird descriptions and how many sketchy buyers are interacting with them.
Count active listings for each Product Category, shown in descending order. Also include metrics related to Language Patterns and Suspicious Buyers.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
cybermarket_pattern_8
cybermarket_pattern
Break down transactions by how complicated their shipping routes were, then show me the counts with the trickiest routes at the top.
Show the number of transactions per Shipping Route Complexity label, highest first.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
cybermarket_pattern_9
cybermarket_pattern
Tell me how the average security score stacks up across sessions with different privacy levels, rounded to 2 decimal places, from totally open to fully masked connections.
List average OpSec score for each Session Anonymity Level, rounded to 2 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": true }
cybermarket_pattern_M_6
cybermarket_pattern
I need to optimize the database for cross-border transaction lookups - could you create a dedicated index for those searches?
Create an index to speed up searches for Cross-Border Transactions.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_10
cybermarket_pattern
I want to know the average keyword-hitting values for all customer and internal chats to identify high-risk patterns. Round to 3 decimal places.
Return the average Suspicion Signal Density (SSD) across all communications, rounded to 3 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": false }
cybermarket_pattern_M_7
cybermarket_pattern
Make a table called 'suspicious_buyers_cap' that lists all the shady buyers, but only include ones that hit at least $10 in suspicious activity.
Create table suspicious_buyers_cap listing Suspicious Buyers with a $10 cap.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_M_8
cybermarket_pattern
I need to mandate two-factor authentication for sessions across the board. Please configure the system to upgrade any active sessions still relying on basic authentication.
Force Premium Authentication by setting auth_protocol_type to "2FA" for every session that is currently using "Basic".
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_11
cybermarket_pattern
I need the total number of transactions that were both marked as fraud and involved cross-border payments.
Count Fraud-Flagged Transactions that are Cross-Border.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_12
cybermarket_pattern
Calculate how many hours we typically take to close Tier-3 escalations. Show the average value, rounded to hundredths.
Return the average resolve time in hours for Tier-3 Escalation Cases, rounded to 2 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cybermarket_pattern_13
cybermarket_pattern
How many platforms show as 'active' right now?
Count platforms currently marked as Active.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_M_9
cybermarket_pattern
Show me where our response is slowest: give me a quick breakdown by key groups, a percentile snapshot, and the 50 slowest sessions.
Analyze connection_security to optimize Threat Handling Rate reports.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_14
cybermarket_pattern
How many shoppers are using advanced authentication?
Count buyers who have Advanced authentication.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_15
cybermarket_pattern
What's the overall revenue from digital goods? Round the result to 2 decimal places.
Sum total sales value for Digital product listings, rounded to 2 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cybermarket_pattern_16
cybermarket_pattern
What's the average distance traveled for shipments with complex routes? Round the result to 2 decimal places.
Compute the average geographical distance for shipments on complex routes and round the result to two decimals.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cybermarket_pattern_M_10
cybermarket_pattern
Set up the secure-platform snapshot, but only create it if it isn't there yet.
Create the secure-platform summary materialized view if it does not already exist.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_17
cybermarket_pattern
How many critical alerts do we have?
Count alerts with Critical severity level.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_18
cybermarket_pattern
What's the ratio of sales that went through escrow? Round to 2 decimal places.
Calculate the ratio of transactions that used escrow, rounded to 2 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cybermarket_pattern_19
cybermarket_pattern
How many message threads contain irregular phrasing, sudden language switches, or machine translated text that indicate possible deception?
Count communication threads with Suspicious language patterns.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cybermarket_pattern_20
cybermarket_pattern
How many buyers have unpredictable spending trends?
Count buyers with Variable spend pattern.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
archeology_scan_1
archeology_scan
I'd like to see which of our dig sites have the best scan quality ratings. Could you show me each site's ID and name along with their average quality score, sorted best to worst?
I'd like to see a quality assessment of scans across our archaeological sites. Show site code, site name, average Scan Quality Score for each site and rank them from highest to lowest quality.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": true }
archeology_scan_2
archeology_scan
Which sites need urgent conservation work? Please show me each location's ID, name, structural condition, preservation status, and whether they're in a high-risk category.
Could you help me find archaeological sites that might need urgent conservation attention? I'm particularly interested in identifying sites that fall into Degradation Risk Zones. For each site, I'd like to see their code, name, structural state, and preservation status, along with their Risk Zone Category. This information would help our conservation team prioritize their efforts.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
archeology_scan_3
archeology_scan
Where are the best places to do scanning based on weather conditions? Show me each site's ID and name with their average environmental condition score indicating suitability for scanning operations.
I'm planning our upcoming archaeological scanning sessions and want to understand which sites have the most favorable scanning environments. Could you show me a report with each site's code, name, and its average Environmental Suitability Index? This would help us prioritize locations where we'll get the best scan quality.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
archeology_scan_4
archeology_scan
How reliable are our scan alignments? For each alignment record, could you show me the registration accuracy relative to scan resolution and the registration confidence category? I need to see its registration ID, project ID, accuracy measurements, error values, calculated ratio, and the confidence category.
I'm evaluating the quality of our scan registrations and would like to understand which ones are most reliable for spatial analysis. Could you show me the Registration Accuracy Ratio and Registration Confidence Level for each registration? I'd need to see the registration ID, project ID, accuracy measurements, error values, calculated RAR (rounded to 2 decimal places), and what confidence level that translates to.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
archeology_scan_5
archeology_scan
Which archaeological sites have the best digital preservation? Rank our locations showing their ID, designation, and a comprehensive metric for evaluating digital preservation quality, with the best first.
For our archaeological site evaluation, I need to quantify the Digital Preservation Quality metrics across our collection. Please compute a comprehensive DPQ index for each archaeological location. Present the results in descending order of DPQ values, displaying only the site identification code, site designation, and calculated DPQ value (rounded to two decimal places) to facilitate prioritization of our digital preservation resources.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
archeology_scan_6
archeology_scan
How good are our 3D models based on the criteria for the high-fidelity standard? Please generate a comprehensive report that shows each site's ID, name, total mesh count, high-fidelity mesh count and proportion (as a percentage), average mesh complexity ratio, average resolution parameters (in mm), average geometric accuracy measurements, and Mesh Quality category. Present the data with the highest-fidelity results first.
Would you generate a comprehensive report categorizing sites based on the High Fidelity Mesh standard? For each archaeological location, please include the site code, site name, total mesh count, high-fidelity mesh count and proportion (as a percentage), the average Mesh Complexity Ratio, average resolution parameters (in mm), average geometric accuracy measurements, and Mesh Quality Classification. The data should be presented in descending order of high-fidelity percentage.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
archeology_scan_7
archeology_scan
What are the scanning conditions like at each site? Show me each location's code and name, along with weather averages (temperature, humidity, and illumination levels), environment suitability score, and corresponding quartile ranking and environmental condition category based on the score.
Show me each site's code and name, along with the average temperature, humidity, and illumination levels. I'd also like to see the average Environmental Suitability Index for each site, classified into quartiles, to understand the range of conditions. Finally, classify each site into Environmental Condition Classification System according to average ESI value.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 1, "distinct": false, "order": true }
archeology_scan_8
archeology_scan
I'd like to analyze how efficiently each scan processing workflow performs and spot any bottlenecks. For every software and stage combination, show me the software, processing stage, average hours needed for processing, average CPU and GPU usage percentages, average data size in GB, the ratio of the processing efficiency, and whether it's running efficiently or hitting bottlenecks ('Bottleneck Detected' if it is qualified as processing bottleneck, 'Efficient' if it is not). Also include how many workflows we're looking at for each combination. Sort the results by bottleneck status first, followed by the ratio value from lowest to highest.
I want to evaluate each scan processing workflow's Processing Efficiency Ratio and identify whether it qualifies as a Processing Bottleneck. For each combination of processing software and stage, please include the software, stage, average processing hours, average CPU and GPU usage percentages, average data size in GB, the average PER value, and the efficiency status ('Bottleneck Detected' if it qualifies as a processing bottleneck, 'Efficient' if it does not). Additionally, provide the total count of workflows for each combination. Sort the results by bottleneck status first, followed by the PER value in ascending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 1, "distinct": false, "order": true }
archeology_scan_9
archeology_scan
Which sites are best for finding artifacts? Show me each location's ID along with the average ratio between total points and cloud density, and the average efficiency of feature identification. I need all sites included, even if some data might be missing. Sort the results by average feature identification efficiency in descending order.
For each archaeological site, I need its Point Cloud Density Ratio and Feature Extraction Efficiency to identify sites with high potential for feature extraction. Please include the site code, average PCDR value, and average FEE value. Ensure that all sites are included, even if some data might be missing. Sort the results by average FEE in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": true }
archeology_scan_10
archeology_scan
Hey, can you help me figure out how efficient our archaeological scanning gear is? I need to know the equipment IDs, their efficiency of computing resource utilization (rounded to two decimal places), the average processing time in hours, their efficiency rankings, and their workflow efficiency status. Also, please include CPU usage (named 'cpu_usage'), GPU usage (named 'gpu_usage'), and processing hours (named 'processing_hours') as JSON in the resource details. Make sure to include all equipment, even if the data's incomplete, and sort everything by PRU value from lowest to highest. Thanks!
My purpose is to analyze the Processing Resource Utilization (PRU) of our archaeological scanning equipment and categorize workflows according to the Workflow Efficiency Classification system. Please provide the equipment IDs, PRU values (rounded to two decimal places), average processing time in hours, efficiency rankings, workflow efficiency status, and include the CPU usage (named 'cpu_usage'), GPU usage (named 'gpu_usage'), and processing hours (named 'processing_hours') in json format as resource details. I'd like all equipment to be included in the analysis, even those with incomplete data. Please sort the results by PRU value in ascending order to help identify the most efficient setups.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }

🚀 LiveSQLBench-Base-Full-v1

A dynamic, contamination-free benchmark for evaluating LLMs on complex, real-world text-to-SQL tasks.

🌐 Website/Leaderboard • 📄 Paper (coming soon) • 💻 GitHub • 🗄️ LiveSQLBench-Base-Lite

Maintained by the 🦜 BIRD Team @ HKU & ☁️ Google Cloud

📊 LiveSQLBench Overview

LiveSQLBench (BIRD-SQL Pro v0.5) is a contamination-free, continuously evolving benchmark designed to evaluate LLMs on complex, real-world text-to-SQL tasks, featuring diverse real-world user queries, including Business Intelligence (BI), CRUD operations, and more. Each release will include around 20 new, fully open-source DBs curated by the BIRD team through expert collaboration and continuous improvement. It will cover a wide range of database sizes, from end-user level (around 127 columns) to industrial level (1340+ columns). Here are the features of the LiveSQLBench benchmark:

  1. 🗄️ Live Databases: Constructed dynamically from extensive, regularly updated CSV datasets, with both base (end-user-level) and large (industrial-level, 1340+ columns per DB) versions to test scalability.

  2. 💬 Live User Queries and SQL: Each task pairs an unambiguous user query with an annotated, gold-standard SQL statement. The user queries are grounded in an external knowledge base, and the solution SQL statements are of medium to hard complexity.

  3. 🧠 Contextual Reasoning (HKB): Every DB includes a hierarchical knowledge base (HKB) in which each knowledge entry may depend on others, so using it requires multi-hop reasoning. Two HKB formats are provided: (1) a structured JSON format, and (2) an unstructured document format.

  4. 🔍 The First Full SQL Spectrum: Supports not just SELECT (Business Intelligence) queries, but also CRUD (e.g., UPDATE, CREATE, and other database management operations) queries.

  5. ⚡ Automated Evaluation: Supports fast evaluation via PostgreSQL templates & Docker. Each question includes verifiable test cases for accurate, reproducible scoring. A soft EX metric is used to evaluate SELECT-only tasks, while customized test cases are designed for DBA tasks such as CRUD operations (CREATE, READ, UPDATE, DELETE); a toy sketch of such a soft comparison follows this list.

  6. 🔄 Truly Live & Hidden Test: New databases and tasks are added over time. Each release features both open development and hidden test phases. The hidden test set from each release becomes the open development set for the next release, ensuring continuous evolution and fair evaluation.
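The exact Soft EX definition is maintained in the evaluation repo; purely as an illustration of the idea (an assumption on our part, not the official metric), an execution-based soft match might compare result sets as unordered multisets of rows while tolerating small float rounding noise. A minimal Python sketch:

from collections import Counter

def _normalize(row, ndigits=2):
    # Round floats so trailing numeric noise does not break the match.
    return tuple(round(v, ndigits) if isinstance(v, float) else v for v in row)

def soft_ex(pred_rows, gold_rows, ndigits=2):
    # Order-insensitive comparison of two result sets as multisets of rows.
    return (Counter(_normalize(r, ndigits) for r in pred_rows)
            == Counter(_normalize(r, ndigits) for r in gold_rows))

# Matches despite different row order and a small float difference.
print(soft_ex([(2, 0.500001), (1, 0.12)], [(1, 0.12), (2, 0.5)]))  # True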

Previous Releases: LiveSQLBench-Base-Lite

🎯 Current Release: LiveSQLBench-Base-Full-v1

Currently, we are pleased to release LiveSQLBench-Base-Full-v1, containing 22 NEW end-user-level databases with 600 NEW tasks (410 SELECT-only, 190 Management tasks), HKB-JSON, and JSON operations in SQL.

Some NEW features:

  • More Natural User Tasks: User tasks are more colloquial and natural, making the mapping to the DB and KB more implicit. Some tasks are even reasoning-intensive, requiring deeper, multi-hop reasoning from the model.
  • More Real and Complex DBs: DBs are more realistic and complex, containing more N:M relationships and noisier schemas and data.

💻 How to Use the Dataset

Get the Dataset and Ground Truth

Download the dataset, containing the data file livesqlbench_data.jsonl and the DB metafiles (schema, HKB, and column-meaning files), by running:

git clone https://huggingface.co/datasets/birdsql/livesqlbench-base-full-v1

To prevent data leakage through automated crawling, please request access to the ground truth and test cases by emailing 📧 bird.bench25@gmail.com with the subject line [livesqlbench-base-full-v1 GT&Test Cases]. An automated response will provide these data fields.
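Once cloned, the public task file is plain JSONL and can be inspected directly. A minimal Python sketch, assuming the default clone directory name (note that the gated fields listed below are absent from the public file):

import json

# Path assumes the default directory created by the git clone above.
path = "livesqlbench-base-full-v1/livesqlbench_data.jsonl"

with open(path, encoding="utf-8") as f:
    tasks = [json.loads(line) for line in f]

print(len(tasks))  # 600 tasks in this release
task = tasks[0]
print(task["instance_id"], task["selected_database"], task["category"])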

Get the Database DDL Dumps and Building Scripts

The complete PostgreSQL database dumps and building scripts (init-databases_postgresql.sh) can be downloaded from Google Drive.

Evaluation

For details on usage and evaluation, please refer to the livesqlbench repo.

πŸ“ Directory Structure

Each database has its own directory:

.
├── README.md
├── database_name
│   ├── database_name_column_meaning_base.json
│   ├── database_name_kb.jsonl
│   ├── database_name_schema.txt
...
├── livesqlbench_data.jsonl

📂 Directory Contents:

  • *_schema.txt: Database schema.
  • *_kb.jsonl: Hierarchical knowledge base entries required to solve the user task.
    • id: The unique identifier for the knowledge.
    • knowledge: The name of the knowledge.
    • description: The description of the knowledge.
    • definition: The clear definition of the knowledge.
    • type: The type of the knowledge.
    • children_knowledge: A list of knowledge IDs that the current knowledge depends on; -1 means no children (see the resolution sketch after this list).
  • *_column_meaning_base.json: Explanation of database columns.
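Because children_knowledge links an entry to its prerequisites, applying one knowledge entry can require resolving its dependencies transitively (the multi-hop reasoning mentioned above). A minimal Python sketch of such a traversal, following the file layout above; the handling of -1 as either a scalar or a list member, and the example database and knowledge ID, are assumptions:

import json

def load_kb(path):
    # One knowledge entry per line, keyed by its "id".
    with open(path, encoding="utf-8") as f:
        return {entry["id"]: entry for entry in map(json.loads, f)}

def resolve(kb, kid, seen=None):
    # Depth-first walk over children_knowledge; -1 marks "no children".
    seen = set() if seen is None else seen
    if kid == -1 or kid in seen:
        return seen
    seen.add(kid)
    children = kb[kid].get("children_knowledge", -1)
    for child in (children if isinstance(children, list) else []):
        resolve(kb, child, seen)
    return seen

kb = load_kb("archeology_scan/archeology_scan_kb.jsonl")  # example path
print(sorted(resolve(kb, 3)))  # knowledge 3 plus everything it transitively needs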

📋 Dataset Fields (livesqlbench_data.jsonl):

  • instance_id: Unique task identifier.
  • selected_database: Associated database name.
  • query: The more natural phrasing of the user query (used in evaluation and on our leaderboard).
  • normal_query: A more concise, direct phrasing of the same query, provided for reference.
  • sol_sql 🔒: Ground truth SQL solution.
  • external_knowledge 🔒: IDs of required external knowledge to solve the user task.
  • preprocess_sql: SQL setup queries.
  • clean_up_sql: SQL queries to reset database state.
  • test_cases 🔒: Test cases to validate the predicted SQL (together with preprocess_sql and clean_up_sql, these drive the lifecycle sketch after this list).
  • category: "Query" (SELECT-only) or "Management" (CRUD).
  • high_level: Boolean indicating whether the user query contains a high-level description.
  • conditions: Indicates decimal/distinct conditions in the user query.
  • difficulty_tier: Task difficulty (Simple, Moderate, Challenging).
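Taken together, the SQL fields define a per-task lifecycle: run preprocess_sql, execute the (gated) solution SQL, validate against test_cases, then reset the database with clean_up_sql. Below is a hedged Python sketch of that flow using psycopg2; the connection parameters are assumptions, and actual scoring should use the official harness in the livesqlbench repo:

import psycopg2  # assumes a local PostgreSQL loaded from the provided dumps

def run_task(task, dsn="dbname={db} user=postgres host=localhost"):
    # Lifecycle: preprocess -> solution -> (inspect results) -> clean up.
    conn = psycopg2.connect(dsn.format(db=task["selected_database"]))
    conn.autocommit = True
    cur = conn.cursor()
    try:
        for sql in task.get("preprocess_sql", []):
            cur.execute(sql)                      # setup queries
        for sql in task["sol_sql"]:               # gated field, obtained by email
            cur.execute(sql)
        return cur.fetchall() if cur.description else None
    finally:
        for sql in task.get("clean_up_sql", []):  # reset database state
            cur.execute(sql)
        conn.close()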

🔒 Accessing Complete Data

To avoid data leakage via auto-crawling, certain fields (e.g., sol_sql, test_cases, external_knowledge) are excluded from the public dataset. For the full dataset, please email 📧 bird.bench25@gmail.com with the subject tag [livesqlbench-base-full-v1 GT&Test Cases]; an automated reply will provide the complete data.

πŸ† Model Performance on LiveSQLBench-Base-Full-v1 (2025-09-04)

Please refer to our homepage: 🌐 LiveSQLBench

🔄 Stay Tuned!

Upcoming releases:

  • 🔄 LiveSQLBench-Large-Lite: Industrial-scale databases with 1340+ columns.
  • 🔄 LiveSQLBench-Large-Full: Comprehensive large-scale datasets.

Want new dialects? Vote for new SQL dialects 🗳️ here!

📄 License:

cc-by-sa-4.0


