klamike committed
Commit a739c45 · verified · 1 Parent(s): aa1dda1

Convert dataset to Parquet (part 00007-of-00008) (#8)


- Convert dataset to Parquet (part 00007-of-00008) (724aaee82510c1dd35ac0b8600b84c4265ef28a4)
- Delete loading script (3a45cfdf9b90f80fd550715cb1200a6842e2153b)
- Delete data file (68caa58f181a4172cd22e9cd52eda7b50476d368)
- Delete data file (4fa81b1a3d5bc82f15a3707c5c83efeeb279ef9c)
- Delete data file (6439e4e521e28f868cf02fe122553efe056edbea)
- Delete data file (fb076badae21625ef105f8caa976aad01f6c9f81)
- Delete data file (d59524d3395c3e991f845b89368e141e0533da2c)
- Delete data file (7d6a7deaf1acebaf4b67a39aeb3dd8d3d37541d0)
- Delete data file (ef2bec654d55846497c8e75a9bffe05f1420e614)
- Delete data file (4cd31de2f6aa404ca77c4c3671d60dd6e812169d)
- Delete data file (1a58331201aff58be4857779b02e88f8f4498bde)
- Delete data file (7dcd1c292f39cebee9eab2d976ce7af9a09e19db)
- Delete data file (580528b1ddab0415cc0d8a9e72ed566a5c2b44a5)
- Delete data file (17b9a0dea52c1ba993d74af0b2aa554813dced71)
- Delete data file (dab70e4bfdc307c1df13d67941d2e7111712f1cf)
- Delete data file (0925d89487e1b44cd78fd92abd537627e4bc5dee)
- Delete data file (52f9db7145d67d81abde7bdad12a4c4d60bbe609)
- Delete data file (e68a5dff4d956259778c35e7ebed360fa2cde364)
- Delete data file (e9a21b38679f392f9f747d649b7578e63e352184)
- Delete data file (926ed3aabd100ec96309076c6a79d18127a8edb3)
- Delete data file (6fc7b76915eb17abe133ccdbd87b28397408b0ff)
- Delete data file (468b19eab2baea77c41428a79ee324560130769a)
- Delete data file (b757a30ceb4c1cf8aec57c025d014f2564304574)
- Delete data file (2ded8bad22bb9b6343b5a86c00a4f29398bb6e0c)
- Delete data file (1be2c8b35f584fb91922329baa8238795c12be0e)
- Delete data file (67fcfc05a0273d72acd1c813ef27837b847df2be)
- Delete data file (99bb6c2795679597eb95289967a56223b0b14128)
- Delete data file (f50c4e344745f1bc0e17a2131665e5dd17d9253e)
- Delete data file (d4571ee9b9f064ffcafd39a76c185186b04c6944)
- Delete data file (2cfb52a17d5c15fb5b7fcc8e6028b97a5ef5dc91)
- Delete data file (9f9ae03e5d8e62667b063772d7cbcd67dc02c820)
- Delete data file (185c6929e1f6249e76b2b2cdbba99cda5304ed7d)
- Delete data file (e20789553088d2489f2462e957b453bfe5c3b09a)
- Delete data file (c8c9db045bcc45c5e435359a46b858f2aff9df53)
- Delete data file (2db0e0d17f3c09359d6db57e716940087733f76a)
- Delete data file (2000787365b299a06a3493dd9d25e76e5ceead29)
- Delete data file (b5006fe2a777fc20cfe10acf6cad831ffabbc202)
- Delete data file (10b0d1581297e03b8fe2616a8c7a9fc2c819bea3)
- Delete data file (d70c422cbbe2aefe2eea9d964ef289351dd2f4ef)
- Delete data file (1fb8924972c5bcf55a199b7b5bf947ec70abe042)

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. infeasible/DCOPF/dual.h5.gz → 6470_rte/test-00053-of-00075.parquet +2 -2
  2. case.json.gz → 6470_rte/test-00054-of-00075.parquet +2 -2
  3. infeasible/ACOPF/meta.h5.gz → 6470_rte/test-00055-of-00075.parquet +2 -2
  4. infeasible/DCOPF/meta.h5.gz → 6470_rte/test-00056-of-00075.parquet +2 -2
  5. 6470_rte/test-00057-of-00075.parquet +3 -0
  6. 6470_rte/test-00058-of-00075.parquet +3 -0
  7. 6470_rte/test-00059-of-00075.parquet +3 -0
  8. 6470_rte/test-00060-of-00075.parquet +3 -0
  9. 6470_rte/test-00061-of-00075.parquet +3 -0
  10. 6470_rte/test-00062-of-00075.parquet +3 -0
  11. 6470_rte/test-00063-of-00075.parquet +3 -0
  12. 6470_rte/test-00064-of-00075.parquet +3 -0
  13. 6470_rte/test-00065-of-00075.parquet +3 -0
  14. 6470_rte/test-00066-of-00075.parquet +3 -0
  15. 6470_rte/test-00067-of-00075.parquet +3 -0
  16. 6470_rte/test-00068-of-00075.parquet +3 -0
  17. 6470_rte/test-00069-of-00075.parquet +3 -0
  18. 6470_rte/test-00070-of-00075.parquet +3 -0
  19. 6470_rte/test-00071-of-00075.parquet +3 -0
  20. 6470_rte/test-00072-of-00075.parquet +3 -0
  21. 6470_rte/test-00073-of-00075.parquet +3 -0
  22. 6470_rte/test-00074-of-00075.parquet +3 -0
  23. PGLearn-Large-6470_rte.py +0 -429
  24. README.md +9 -1
  25. config.toml +0 -42
  26. data/pglearn/9241_pegase/slurm/logs/OPF.4261836-45.out +0 -9
  27. data/pglearn/9241_pegase/slurm/logs/OPF.4261836-46.out +0 -9
  28. data/pglearn/9241_pegase/slurm/logs/OPF.4261836-47.out +0 -9
  29. data/pglearn/9241_pegase/slurm/logs/OPF.4261836-48.out +0 -9
  30. data/pglearn/9241_pegase/slurm/logs/OPF.4261836-49.out +0 -9
  31. infeasible/ACOPF/dual.h5.gz +0 -3
  32. infeasible/ACOPF/primal.h5.gz +0 -3
  33. infeasible/DCOPF/primal.h5.gz +0 -3
  34. infeasible/SOCOPF/dual.h5.gz +0 -3
  35. infeasible/SOCOPF/meta.h5.gz +0 -3
  36. infeasible/SOCOPF/primal.h5.gz +0 -3
  37. infeasible/input.h5.gz +0 -3
  38. test/ACOPF/dual.h5.gz +0 -3
  39. test/ACOPF/meta.h5.gz +0 -3
  40. test/ACOPF/primal.h5.gz +0 -3
  41. test/DCOPF/dual.h5.gz +0 -3
  42. test/DCOPF/meta.h5.gz +0 -3
  43. test/DCOPF/primal.h5.gz +0 -3
  44. test/SOCOPF/dual.h5.gz +0 -3
  45. test/SOCOPF/meta.h5.gz +0 -3
  46. test/SOCOPF/primal.h5.gz +0 -3
  47. test/input.h5.gz +0 -3
  48. train/ACOPF/dual.h5.gz +0 -3
  49. train/ACOPF/meta.h5.gz +0 -3
  50. train/ACOPF/primal.h5.gz +0 -3
infeasible/DCOPF/dual.h5.gz → 6470_rte/test-00053-of-00075.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5d9299d6ea96f792f752deaee790f4168fc2b31b112a3288756f761435624f1e
- size 163514808
+ oid sha256:86ac05d829760dc17fda8004398ed33ff5197edffaaaec878d8b790b7130e4ef
+ size 492977115
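Each body in these pointer-file diffs is a Git LFS pointer, not the data itself: three `key value` lines recording the pointer-spec version, the SHA-256 of the real object, and its size in bytes. A minimal parsing sketch (the helper `parse_lfs_pointer` is illustrative, not part of this repository):

```python
# Parse a Git LFS pointer file into its three fields.
def parse_lfs_pointer(text: str) -> dict[str, str]:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    assert fields["version"] == "https://git-lfs.github.com/spec/v1"
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:86ac05d829760dc17fda8004398ed33ff5197edffaaaec878d8b790b7130e4ef
size 492977115
"""
print(parse_lfs_pointer(pointer))  # {'version': ..., 'oid': 'sha256:86ac...', 'size': '492977115'}
```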
case.json.gz → 6470_rte/test-00054-of-00075.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2a5bcccfae90e14f99b4556bd737be8c8a32409848255413030b7fd44c582538
- size 5183519
+ oid sha256:df4aafffb871134a3851a069de31a3d1c23f33109229b077a79262272bba409e
+ size 492855168
infeasible/ACOPF/meta.h5.gz → 6470_rte/test-00055-of-00075.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:65191c0dd3463a074d8dbdce83589601dd6c4321973761e1f3c72b41beec632f
- size 287007
+ oid sha256:f22cf005704caf4f80699b8d42bc86d51f11de393bc595e1ddd65644cebf9751
+ size 493031332
infeasible/DCOPF/meta.h5.gz → 6470_rte/test-00056-of-00075.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2273bc34d1cfe93d8722c9234a34247892b29f57bd616d177b2c3f451b3da2c9
- size 273341
+ oid sha256:d65f394f156a73275ce30cde12e3b2370ebcba648f45edab61139c7092542309
+ size 492951346
6470_rte/test-00057-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a79337392cda3f43ab432adcc7e5c43465df94bae52af896f6202046ee8c2c18
+ size 492886870
6470_rte/test-00058-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:534e3efd445c7b37bfa2a85e0390571b673abf45e717f144707879570ffe7e6d
+ size 492939578
6470_rte/test-00059-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fe62b6dcd00360f1314e9499a2c3c22381bec52540e571a409d075a4b22e34d
+ size 493037032
6470_rte/test-00060-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33d3581b210698f362bb4e4ee932ea93cea8ddc9e7e29775364e6f4c6735517d
+ size 492930855
6470_rte/test-00061-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37f8c249b89a022761ae4ea44906e4f50117272703b1bd96769ce0740854d950
+ size 492925622
6470_rte/test-00062-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:046789c143103c82faf3caf8d574176ecfb76f76824e6a912df64bef8533d2d1
+ size 492910007
6470_rte/test-00063-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c0b36d8c45c3c6e4ae7557a40803147d600e44ff1f54b9c4f5c1fc58676b044
+ size 493108978
6470_rte/test-00064-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5414a53486874fb5bafe147d7721a6ccfeb08be1357b00d0c8a17a19f3a9e177
+ size 492911906
6470_rte/test-00065-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:597784a5ca3d5a74e7fad1a1bed5d2c8e90b6ee69af17ad4dc08676d633e94f4
+ size 492925317
6470_rte/test-00066-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e997c90a743b7938825edf3f00f9d47a53656a2bf97c439b74daa5f5e013de5
+ size 492921962
6470_rte/test-00067-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8f8aa391efd01e012c22dd36978198e47621ff8c0b71cd5dac70fa2582455b1
+ size 492869293
6470_rte/test-00068-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09e7c4f58b4534e54edc9f4b070d41f4e4c1b2f5e4132cadfad248cf75f60ca8
+ size 493019632
6470_rte/test-00069-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3dcdcdf167efa0c4c9da28a9c0f3024b78c8bf143a93c939e7ad563f238f2141
+ size 492813169
6470_rte/test-00070-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:821d5a8b5c47d61f0dd08b1176538ded8443a6607c4bfceaf76144b1072ba8c4
+ size 492926960
6470_rte/test-00071-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08edaff41e630094e951d4c044c1e13993a24bbb7e4f285fe75a76fc95790b55
+ size 492839396
6470_rte/test-00072-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5e08015251f116ffe52746fccfa0376a0de39cee35072a7baa1ee029b63d477
+ size 493084853
6470_rte/test-00073-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc49b3c0f810c264af876d588636aa826ad95947c6b12156ec296333c398d2e0
+ size 492997438
6470_rte/test-00074-of-00075.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c4abea55aeda06c10d362a23875bc2d39f95496035ed174867fe97e567039e4
+ size 492958698
PGLearn-Large-6470_rte.py DELETED
@@ -1,429 +0,0 @@
- from __future__ import annotations
- from dataclasses import dataclass
- from pathlib import Path
- import json
- import shutil
-
- import datasets as hfd
- import h5py
- import pgzip as gzip
- import pyarrow as pa
-
- # ┌──────────────┐
- # │   Metadata   │
- # └──────────────┘
-
- @dataclass
- class CaseSizes:
-     n_bus: int
-     n_load: int
-     n_gen: int
-     n_branch: int
-
- CASENAME = "6470_rte"
- SIZES = CaseSizes(n_bus=6470, n_load=3670, n_gen=761, n_branch=9005)
- NUM_TRAIN = 73912
- NUM_TEST = 18478
- NUM_INFEASIBLE = 7628
- SPLITFILES = {
-     "train/SOCOPF/dual.h5.gz": ["train/SOCOPF/dual/xaa", "train/SOCOPF/dual/xab"],
- }
-
- URL = "https://huggingface.co/datasets/PGLearn/PGLearn-Large-6470_rte"
- DESCRIPTION = """\
- The 6470_rte PGLearn optimal power flow dataset, part of the PGLearn-Large collection. \
- """
- VERSION = hfd.Version("1.0.0")
- DEFAULT_CONFIG_DESCRIPTION="""\
- This configuration contains feasible input, primal solution, and dual solution data \
- for the ACOPF, DCOPF, and SOCOPF formulations on the {case} system. For case data, \
- download the case.json.gz file from the `script` branch of the repository. \
- https://huggingface.co/datasets/PGLearn/PGLearn-Large-6470_rte/blob/script/case.json.gz
- """
- USE_ML4OPF_WARNING = """
- ================================================================================================
- Loading PGLearn-Large-6470_rte through the `datasets.load_dataset` function may be slow.
-
- Consider using ML4OPF to directly convert to `torch.Tensor`; for more info see:
- https://github.com/AI4OPT/ML4OPF?tab=readme-ov-file#manually-loading-data
-
- Or, use `huggingface_hub.snapshot_download` and an HDF5 reader; for more info see:
- https://huggingface.co/datasets/PGLearn/PGLearn-Large-6470_rte#downloading-individual-files
- ================================================================================================
- """
- CITATION = """\
- @article{klamkinpglearn,
-     title={{PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow}},
-     author={Klamkin, Michael and Tanneau, Mathieu and Van Hentenryck, Pascal},
-     year={2025},
- }\
- """
-
- IS_COMPRESSED = True
-
- # ┌──────────────────┐
- # │   Formulations   │
- # └──────────────────┘
-
- def acopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(acopf_primal_features(sizes))
-     if dual: features.update(acopf_dual_features(sizes))
-     if meta: features.update({f"ACOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def dcopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(dcopf_primal_features(sizes))
-     if dual: features.update(dcopf_dual_features(sizes))
-     if meta: features.update({f"DCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def socopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(socopf_primal_features(sizes))
-     if dual: features.update(socopf_dual_features(sizes))
-     if meta: features.update({f"SOCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- FORMULATIONS_TO_FEATURES = {
-     "ACOPF": acopf_features,
-     "DCOPF": dcopf_features,
-     "SOCOPF": socopf_features,
- }
-
- # ┌───────────────────┐
- # │   BuilderConfig   │
- # └───────────────────┘
-
- class PGLearnLarge6470_rteConfig(hfd.BuilderConfig):
-     """BuilderConfig for PGLearn-Large-6470_rte.
-     By default, primal solution data, metadata, input, casejson, are included for the train and test splits.
-
-     To modify the default configuration, pass attributes of this class to `datasets.load_dataset`:
-
-     Attributes:
-         formulations (list[str]): The formulation(s) to include, e.g. ["ACOPF", "DCOPF"]
-         primal (bool, optional): Include primal solution data. Defaults to True.
-         dual (bool, optional): Include dual solution data. Defaults to False.
-         meta (bool, optional): Include metadata. Defaults to True.
-         input (bool, optional): Include input data. Defaults to True.
-         casejson (bool, optional): Include case.json data. Defaults to True.
-         train (bool, optional): Include training samples. Defaults to True.
-         test (bool, optional): Include testing samples. Defaults to True.
-         infeasible (bool, optional): Include infeasible samples. Defaults to False.
-     """
-     def __init__(self,
-         formulations: list[str],
-         primal: bool=True, dual: bool=False, meta: bool=True, input: bool = True, casejson: bool=True,
-         train: bool=True, test: bool=True, infeasible: bool=False,
-         compressed: bool=IS_COMPRESSED, **kwargs
-     ):
-         super(PGLearnLarge6470_rteConfig, self).__init__(version=VERSION, **kwargs)
-
-         self.case = CASENAME
-         self.formulations = formulations
-
-         self.primal = primal
-         self.dual = dual
-         self.meta = meta
-         self.input = input
-         self.casejson = casejson
-
-         self.train = train
-         self.test = test
-         self.infeasible = infeasible
-
-         self.gz_ext = ".gz" if compressed else ""
-
-     @property
-     def size(self):
-         return SIZES
-
-     @property
-     def features(self):
-         features = {}
-         if self.casejson: features.update(case_features())
-         if self.input: features.update(input_features(SIZES))
-         for formulation in self.formulations:
-             features.update(FORMULATIONS_TO_FEATURES[formulation](SIZES, self.primal, self.dual, self.meta))
-         return hfd.Features(features)
-
-     @property
-     def splits(self):
-         splits: dict[hfd.Split, dict[str, str | int]] = {}
-         if self.train:
-             splits[hfd.Split.TRAIN] = {
-                 "name": "train",
-                 "num_examples": NUM_TRAIN
-             }
-         if self.test:
-             splits[hfd.Split.TEST] = {
-                 "name": "test",
-                 "num_examples": NUM_TEST
-             }
-         if self.infeasible:
-             splits[hfd.Split("infeasible")] = {
-                 "name": "infeasible",
-                 "num_examples": NUM_INFEASIBLE
-             }
-         return splits
-
-     @property
-     def urls(self):
-         urls: dict[str, None | str | list] = {
-             "case": None, "train": [], "test": [], "infeasible": [],
-         }
-
-         if self.casejson:
-             urls["case"] = f"case.json" + self.gz_ext
-         else:
-             urls.pop("case")
-
-         split_names = []
-         if self.train: split_names.append("train")
-         if self.test: split_names.append("test")
-         if self.infeasible: split_names.append("infeasible")
-
-         for split in split_names:
-             if self.input: urls[split].append(f"{split}/input.h5" + self.gz_ext)
-             for formulation in self.formulations:
-                 if self.primal:
-                     filename = f"{split}/{formulation}/primal.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-                 if self.dual:
-                     filename = f"{split}/{formulation}/dual.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-                 if self.meta:
-                     filename = f"{split}/{formulation}/meta.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-         return urls
-
- # ┌────────────────────┐
- # │   DatasetBuilder   │
- # └────────────────────┘
-
- class PGLearnLarge6470_rte(hfd.ArrowBasedBuilder):
-     """DatasetBuilder for PGLearn-Large-6470_rte.
-     The main interface is `datasets.load_dataset` with `trust_remote_code=True`, e.g.
-
-     ```python
-     from datasets import load_dataset
-     ds = load_dataset("PGLearn/PGLearn-Large-6470_rte", trust_remote_code=True,
-         # modify the default configuration by passing kwargs
-         formulations=["DCOPF"],
-         dual=False,
-         meta=False,
-     )
-     ```
-     """
-
-     DEFAULT_WRITER_BATCH_SIZE = 10000
-     BUILDER_CONFIG_CLASS = PGLearnLarge6470_rteConfig
-     DEFAULT_CONFIG_NAME=CASENAME
-     BUILDER_CONFIGS = [
-         PGLearnLarge6470_rteConfig(
-             name=CASENAME, description=DEFAULT_CONFIG_DESCRIPTION.format(case=CASENAME),
-             formulations=list(FORMULATIONS_TO_FEATURES.keys()),
-             primal=True, dual=True, meta=True, input=True, casejson=False,
-             train=True, test=True, infeasible=False,
-         )
-     ]
-
-     def _info(self):
-         return hfd.DatasetInfo(
-             features=self.config.features, splits=self.config.splits,
-             description=DESCRIPTION + self.config.description,
-             homepage=URL, citation=CITATION,
-         )
-
-     def _split_generators(self, dl_manager: hfd.DownloadManager):
-         hfd.logging.get_logger().warning(USE_ML4OPF_WARNING)
-
-         filepaths = dl_manager.download_and_extract(self.config.urls)
-
-         splits: list[hfd.SplitGenerator] = []
-         if self.config.train:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TRAIN,
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["train"]), n_samples=NUM_TRAIN),
-             ))
-         if self.config.test:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TEST,
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["test"]), n_samples=NUM_TEST),
-             ))
-         if self.config.infeasible:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split("infeasible"),
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["infeasible"]), n_samples=NUM_INFEASIBLE),
-             ))
-         return splits
-
-     def _generate_tables(self, case_file: str | None, data_files: tuple[hfd.utils.track.tracked_str | list[hfd.utils.track.tracked_str]], n_samples: int):
-         case_data: str | None = json.dumps(json.load(open_maybe_gzip_cat(case_file))) if case_file is not None else None
-         data: dict[str, h5py.File] = {}
-         for file in data_files:
-             v = h5py.File(open_maybe_gzip_cat(file), "r")
-             if isinstance(file, list):
-                 k = "/".join(Path(file[0].get_origin()).parts[-3:-1]).split(".")[0]
-             else:
-                 k = "/".join(Path(file.get_origin()).parts[-2:]).split(".")[0]
-             data[k] = v
-         for k in list(data.keys()):
-             if "/input" in k: data[k.split("/", 1)[1]] = data.pop(k)
-
-         batch_size = self._writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
-         for i in range(0, n_samples, batch_size):
-             effective_batch_size = min(batch_size, n_samples - i)
-
-             sample_data = {
-                 f"{dk}/{k}":
-                     hfd.features.features.numpy_to_pyarrow_listarray(v[i:i + effective_batch_size, ...])
-                 for dk, d in data.items() for k, v in d.items() if f"{dk}/{k}" in self.config.features
-             }
-
-             if case_data is not None:
-                 sample_data["case/json"] = pa.array([case_data] * effective_batch_size)
-
-             yield i, pa.Table.from_pydict(sample_data)
-
-         for f in data.values():
-             f.close()
-
- # ┌──────────────┐
- # │   Features   │
- # └──────────────┘
-
- FLOAT_TYPE = "float32"
- INT_TYPE = "int64"
- BOOL_TYPE = "bool"
- STRING_TYPE = "string"
-
- def case_features():
-     # FIXME: better way to share schema of case data -- need to treat jagged arrays
-     return {
-         "case/json": hfd.Value(STRING_TYPE),
-     }
-
- META_FEATURES = {
-     "meta/seed": hfd.Value(dtype=INT_TYPE),
-     "meta/formulation": hfd.Value(dtype=STRING_TYPE),
-     "meta/primal_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/dual_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/primal_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/dual_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/termination_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/build_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/extract_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/solve_time": hfd.Value(dtype=FLOAT_TYPE),
- }
-
- def input_features(sizes: CaseSizes):
-     return {
-         "input/pd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/qd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/gen_status": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/branch_status": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/seed": hfd.Value(dtype=INT_TYPE),
-     }
-
- def acopf_primal_features(sizes: CaseSizes):
-     return {
-         "ACOPF/primal/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def acopf_dual_features(sizes: CaseSizes):
-     return {
-         "ACOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_fr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_to": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def dcopf_primal_features(sizes: CaseSizes):
-     return {
-         "DCOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def dcopf_dual_features(sizes: CaseSizes):
-     return {
-         "DCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def socopf_primal_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/primal/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def socopf_dual_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/jabr": hfd.Array2D(shape=(sizes.n_branch, 4), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_fr": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_to": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
-
- # ┌───────────────┐
- # │   Utilities   │
- # └───────────────┘
-
- def open_maybe_gzip_cat(path: str | list):
-     if isinstance(path, list):
-         dest = Path(path[0]).parent.with_suffix(".h5")
-         if not dest.exists():
-             with open(dest, "wb") as dest_f:
-                 for piece in path:
-                     with open(piece, "rb") as piece_f:
-                         shutil.copyfileobj(piece_f, dest_f)
-             shutil.rmtree(Path(piece).parent)
-         path = dest.as_posix()
-     return gzip.open(path, "rb") if path.endswith(".gz") else open(path, "rb")
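The warning string in the deleted script points at two alternatives that survive its removal. A rough sketch of the `huggingface_hub.snapshot_download` + HDF5 route it mentions, assuming the pre-conversion `.h5.gz` layout remains available on the `script` branch (the revision and the `pg` key are assumptions inferred from the script above, not a documented API):

```python
from huggingface_hub import snapshot_download
import gzip
import h5py

local = snapshot_download(
    repo_id="PGLearn/PGLearn-Large-6470_rte",
    repo_type="dataset",
    revision="script",  # assumption: pre-Parquet files live on the `script` branch
    allow_patterns=["test/ACOPF/primal.h5.gz"],
)

# mirrors what open_maybe_gzip_cat does above: decompress, then hand h5py a file object
with h5py.File(gzip.open(f"{local}/test/ACOPF/primal.h5.gz", "rb"), "r") as f:
    pg = f["pg"][:10]  # assumption: HDF5 keys match the feature names (pg, qg, ...)
    print(pg.shape)
```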
README.md CHANGED
@@ -288,6 +288,14 @@ dataset_info:
    - name: test
      num_bytes: 37010930475
      num_examples: 18478
-   download_size: 160191271379
+   download_size: 185038153518
    dataset_size: 185054652373
+ configs:
+ - config_name: 6470_rte
+   data_files:
+   - split: train
+     path: 6470_rte/train-*
+   - split: test
+     path: 6470_rte/test-*
+   default: true
  ---
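With the `configs` block above in place, the Hub resolves the dataset straight from the Parquet shards, so the `trust_remote_code=True` flag required by the deleted script is no longer needed. A minimal sketch; `streaming=True` is only a suggestion given the ~185 GB download size recorded above:

```python
from datasets import load_dataset

# "6470_rte" is the default config added in this README change,
# backed by the 6470_rte/train-* and 6470_rte/test-* Parquet shards.
ds = load_dataset("PGLearn/PGLearn-Large-6470_rte", split="test", streaming=True)
print(next(iter(ds)).keys())
```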
config.toml DELETED
@@ -1,42 +0,0 @@
- # Name of the reference PGLib case. Must be a valid PGLib case name.
- pglib_case = "pglib_opf_case6470_rte"
- floating_point_type = "Float32"
-
- [sampler]
- # data sampler options
- [sampler.load]
- noise_type = "ScaledUniform"
- l = 0.6      # Lower bound of base load factor
- u = 1.0      # Upper bound of base load factor
- sigma = 0.20 # Relative (multiplicative) noise level.
-
-
- [OPF]
-
- [OPF.ACOPF]
- type = "ACOPF"
- solver.name = "Ipopt"
- solver.attributes.tol = 1e-6
- solver.attributes.linear_solver = "ma27"
-
- [OPF.DCOPF]
- # Formulation/solver options
- type = "DCOPF"
- solver.name = "HiGHS"
-
- [OPF.SOCOPF]
- type = "SOCOPF"
- solver.name = "Clarabel"
- # Tight tolerances
- solver.attributes.tol_gap_abs = 1e-6
- solver.attributes.tol_gap_rel = 1e-6
- solver.attributes.tol_feas = 1e-6
- solver.attributes.tol_infeas_rel = 1e-6
- solver.attributes.tol_ktratio = 1e-6
- # Reduced accuracy settings
- solver.attributes.reduced_tol_gap_abs = 1e-6
- solver.attributes.reduced_tol_gap_rel = 1e-6
- solver.attributes.reduced_tol_feas = 1e-6
- solver.attributes.reduced_tol_infeas_abs = 1e-6
- solver.attributes.reduced_tol_infeas_rel = 1e-6
- solver.attributes.reduced_tol_ktratio = 1e-6
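For context, the `[sampler.load]` block above records how the load profiles were drawn when the dataset was generated. The exact ScaledUniform sampler lives in the PGLearn data generator, so the formula below is only a hedged reading of its three parameters, not its definition:

```python
import numpy as np

rng = np.random.default_rng(0)
pd_ref = np.ones(3670)  # stand-in for the reference loads (n_load = 3670 for 6470_rte)

# assumed form: one global base load factor drawn from [l, u], then
# per-load multiplicative noise at relative level sigma around it
l, u, sigma = 0.6, 1.0, 0.20
alpha = rng.uniform(l, u)
eta = rng.uniform(1.0 - sigma, 1.0 + sigma, size=pd_ref.shape)
pd_sample = alpha * eta * pd_ref
```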
data/pglearn/9241_pegase/slurm/logs/OPF.4261836-45.out DELETED
@@ -1,9 +0,0 @@
- ---------------------------------------
- Begin Slurm Prolog: May-03-2025 23:20:55
- Job ID: 4261837
- User ID: mklamkin3
- Account: gts-phentenryck3-coda20
- Job name: OPF
- Partition: cpu-small
- QOS: embers
- ---------------------------------------
data/pglearn/9241_pegase/slurm/logs/OPF.4261836-46.out DELETED
@@ -1,9 +0,0 @@
- ---------------------------------------
- Begin Slurm Prolog: May-03-2025 23:20:55
- Job ID: 4261838
- User ID: mklamkin3
- Account: gts-phentenryck3-coda20
- Job name: OPF
- Partition: cpu-small
- QOS: embers
- ---------------------------------------
data/pglearn/9241_pegase/slurm/logs/OPF.4261836-47.out DELETED
@@ -1,9 +0,0 @@
- ---------------------------------------
- Begin Slurm Prolog: May-03-2025 23:20:55
- Job ID: 4261839
- User ID: mklamkin3
- Account: gts-phentenryck3-coda20
- Job name: OPF
- Partition: cpu-small
- QOS: embers
- ---------------------------------------
data/pglearn/9241_pegase/slurm/logs/OPF.4261836-48.out DELETED
@@ -1,9 +0,0 @@
- ---------------------------------------
- Begin Slurm Prolog: May-03-2025 23:20:55
- Job ID: 4261840
- User ID: mklamkin3
- Account: gts-phentenryck3-coda20
- Job name: OPF
- Partition: cpu-small
- QOS: embers
- ---------------------------------------
data/pglearn/9241_pegase/slurm/logs/OPF.4261836-49.out DELETED
@@ -1,9 +0,0 @@
- ---------------------------------------
- Begin Slurm Prolog: May-03-2025 23:20:55
- Job ID: 4261836
- User ID: mklamkin3
- Account: gts-phentenryck3-coda20
- Job name: OPF
- Partition: cpu-small
- QOS: embers
- ---------------------------------------
infeasible/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:9cc7721d1899210d8f2511ca281d56adce608033cbfe4552aa3ef0c92ba10c8a
- size 3279205926
infeasible/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:53314e3232ade5ae54ed226cb786fc92f3adec30d90340fd05edf513525670dc
- size 1383280253
infeasible/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e369fddc2117691d30efed97fd2a12b952927f04fc5fd35afe7e81c354e5e2f1
- size 208432520
infeasible/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f9fbf1d778c89225ba88e14d433020f331e6e43c94bb20a561610118ea65abdb
- size 5698644882
infeasible/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bb75a457b358fb1b76de1a64ba5f28c406747334199062f38b562a68c039241e
- size 269976
infeasible/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:22e5a544f9015a6109806cc6d3a664990775e3f4384a9580334f73efb7887edb
- size 1472197148
infeasible/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1becbfc0458db85fa688dc34282bdeddb91b8650a872925eaf15b49e80e0f14a
- size 202109735
test/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a9a741e9f8664c2a72ed97e07d243fc978f9f7089b2ce88a25c949e2bd42074d
- size 7298150074
test/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8932327179af34efc8b6c0f6e98087646ebcf25f5ed24bea20f7542955a8c860
- size 634973
test/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2d621b285d1a0293b401060507e39255d3aef972aa5936e8618ad38235f1c765
- size 3303038711
test/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5622255fb05d1015d6634745818fac8801a4c30e60d5c6ef4a42da6e339df021
- size 758845639
test/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:cd0007122327925991cbe8edf7c97231e9b10edc528eb3087249f9fd582f1006
- size 630592
test/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d98b173559ae2cb7cb49faaa26a5a45edac129056008c53e50a672b97fdebf82
- size 985626601
test/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:82e4b2b56e2cfeedac285f510be53af3569fa8757bc5e3e9dafa3097cfcef9a1
- size 13493024102
test/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:60ce99a04e8e4aaf8e631db27fbfd9d8261675e92923f88dad2133f8d21bd314
- size 635115
test/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:950a28e7c0176ce623ca9063b82fdcb16a7225a93d2f27cad738d9d6d2e482dc
- size 4082946361
test/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:eaab5fe6642793cc7b8733cc8008b31f75e6463b6035ffaf3bfa7b8aa019d834
- size 489639546
train/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4d72cc0f17693e9a0160e4a3b1e1ce1b7070da95fb6e7265aa827fbe78029a7f
- size 29192858819
train/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a94933714ce872f3d7f60a76bb22f7bbdb2796d46b2ecbaecf44e3d1e3308d1e
- size 2501217
train/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d7b4102860f1f03cdb43faa9cf034711f27106e9895d07b7a94f1a8ea55f19a7
- size 13212119576