31 Commits

Author SHA1 Message Date
Matthew Evans
3c1446ff07 Merge pull request #119 from d-cogswell/master
Fix deprecated numpy aliases which were removed in 2.0.0
2024-07-31 15:54:27 +01:00
Dan Cogswell
e18a21ffbc Reverts 79e3df0, which pinned the numpy version. 2024-07-31 10:18:47 -04:00
Dan Cogswell
260ad72a6e Fix deprecated numpy aliases which were removed in numpy version 2.0.0. 2024-07-30 10:55:48 -04:00
Matthew Evans
7d264999db Merge pull request #118 from echemdata/ml-evs/lfs
LFS workaround using archived releases in CI
2024-07-12 15:29:02 +01:00
Matthew Evans
1e53de56ef LFS note formatting and location in README 2024-07-12 14:33:53 +01:00
Matthew Evans
f44851ec37 Add flake8 skip 2024-07-12 14:31:15 +01:00
Matthew Evans
3b5dc48fc6 Add LFS warning note 2024-07-12 14:31:14 +01:00
Matthew Evans
56bebfe498 Replace failing lfs caching with downloading test files from release tarballs 2024-07-12 14:31:11 +01:00
Matthew Evans
d33c6f7561 Merge pull request #117 from echemdata/ml-evs/pin-numpy
Add upper numpy pin
2024-07-12 13:20:03 +01:00
Matthew Evans
79e3df0ed9 Add upper numpy pin 2024-07-12 12:45:02 +01:00
3c904db04e Merge pull request #105 from echemdata/ml-evs/arbin-in-memory
Optionally read Arbin into in-memory sqlite without temporary file
2024-03-03 10:32:30 +02:00
Matthew Evans
fbc90fc961 Update tests/test_Arbin.py
Co-authored-by: Chris Kerr <chris.kerr@mykolab.ch>
2024-03-02 18:13:40 +01:00
545a82ec35 Bump version to 0.4.1
I forgot to update the version before tagging 0.4.0 so I will have to
tag a 0.4.1 release instead.
2024-03-02 16:29:59 +02:00
7c37ea306b Merge pull request #107 from echemdata/ml-evs/analog-in-fix
Add `Analog IN <n>/V` columns to map
2024-03-02 16:20:19 +02:00
cd3eaae2c1 Merge pull request #103 from echemdata/ml-evs/preparing-release
Refresh README in preparation for release
2024-03-02 15:46:55 +02:00
Matthew Evans
a9be96b5c2 Fix column name and add explanation 2024-02-29 09:40:54 +00:00
Matthew Evans
0c2ecd42ca Duplicate 'ANALOG IN 1/V' to allow reading 2024-02-26 11:44:26 +00:00
Matthew Evans
a845731131 Optionally read Arbin into in-memory sqlite without temporary file 2024-02-12 10:55:52 +00:00
Matthew Evans
6d2a5b31fb Refresh the README with installation instructions and an arbin snippet 2024-02-12 10:39:09 +00:00
1fd9f8454a Merge pull request #97 from chatcannon/JhonFlash-master
Add support for EC-Lab v11.50

Rebased from #95 by @JhonFlash3008
2024-02-06 21:43:54 +02:00
f0177f2470 Merge pull request #101 from echemdata/ml-evs/attempt-to-cache-lfs
Attempt to cache LFS in GH actions
2024-02-06 21:42:10 +02:00
Matthew Evans
ea50999349 Bump setup-python to v5 2024-02-03 21:23:31 +01:00
Matthew Evans
88d1fc3a71 Attempt to cache LFS in GH actions 2024-02-03 21:15:10 +01:00
4971f2b550 Apply review comments 2024-02-03 14:24:03 +02:00
5cdc620f16 Fix flake8 lint 2024-02-03 14:00:16 +02:00
Jonathan Schillings
7a6ac1c542 added tests for v11.50 2024-02-03 13:53:23 +02:00
46f296f61f Merge branch 'master' into JhonFlash-master 2024-02-03 13:51:43 +02:00
aa0aee6128 Merge pull request #99 from chatcannon/mdbtools-1-0
Update regular expression for mdbtools 1.0 output
2024-02-03 13:47:06 +02:00
dbd01957db Use newer Ubuntu image for CI tests
We no longer need to use an old Ubuntu image with an old mdbtools version.
2024-01-20 23:41:43 +02:00
13957160f8 Update regular expression for mdbtools 1.0 output
The output formatting has changed - it now puts multiple data rows in a
single INSERT statement, and also changes the quoting of text data.
2024-01-20 23:39:41 +02:00
jschilli
77d56290d4 Added support for v11.50:
A few modifications in VMPdata_dtype_from_colIDs
Added new headers VMPmodule_hdr_v2
Modified MPRfile initialization

Includes squashed linting fixes by @ml-evs
2024-01-20 22:24:09 +02:00
21 changed files with 359 additions and 42 deletions

CI test workflow (GitHub Actions)

@@ -17,8 +17,7 @@ jobs:
   pytest:
     name: Run Python unit tests
-    # Note that 20.04 is currently required until galvani supports mdbtools>=1.0.
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
     strategy:
       fail-fast: false
@@ -27,13 +26,33 @@ jobs:
         python-version: ['3.8', '3.9', '3.10', '3.11']
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           fetch-depth: 0
-          lfs: true
+          lfs: false
+      # Due to limited LFS bandwidth, it is preferable to download
+      # test files from the last release.
+      #
+      # This does mean that testing new LFS files in the CI is tricky;
+      # care should be taken to also test new files locally first.
+      # Tests missing these files in the CI should still fail.
+      - name: Download static files from last release for testing
+        uses: robinraju/release-downloader@v1
+        with:
+          latest: true
+          tarBall: false
+          fileName: "galvani-*.gz"
+          zipBall: false
+          out-file-path: /home/runner/work/last-release
+          extract: true
+      - name: Copy test files from static downloaded release
+        run: |
+          cp -r /home/runner/work/last-release/*/tests/testdata tests
       - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
@@ -50,5 +69,5 @@ jobs:
           tox -vv --notest
       - name: Run all tests
-        run: |
+        run: |-
           tox --skip-pkg-install

README.md

@@ -7,21 +7,76 @@ SPDX-FileCopyrightText: 2013-2020 Christopher Kerr, Peter Attia
 SPDX-License-Identifier: GPL-3.0-or-later
 -->
-Read proprietary file formats from electrochemical test stations
+Read proprietary file formats from electrochemical test stations.
+
+# Usage
-## Bio-Logic .mpr files ##
+## Bio-Logic .mpr files
 Use the `MPRfile` class from BioLogic.py (exported in the main package)
-````
+```python
 from galvani import BioLogic
 import pandas as pd
 mpr_file = BioLogic.MPRfile('test.mpr')
 df = pd.DataFrame(mpr_file.data)
-````
+```
-## Arbin .res files ##
+## Arbin .res files
-Use the res2sqlite.py script to convert the .res file to a sqlite3 database
-with the same schema.
+Use the `./galvani/res2sqlite.py` script to convert the .res file to a sqlite3 database with the same schema, which can then be interrogated with external tools or directly in Python.
+For example, to extract the data into a pandas DataFrame (will need to be installed separately):
+```python
+import sqlite3
+import pandas as pd
+from galvani.res2sqlite import convert_arbin_to_sqlite
+convert_arbin_to_sqlite("input.res", "output.sqlite")
+with sqlite3.connect("output.sqlite") as db:
+    df = pd.read_sql(sql="select * from Channel_Normal_Table", con=db)
+```
+This functionality requires [MDBTools](https://github.com/mdbtools/mdbtools) to be installed on the local system.
+
+# Installation
+The latest galvani releases can be installed from [PyPI](https://pypi.org/project/galvani/) via
+```shell
+pip install galvani
+```
+The latest development version can be installed with `pip` directly from GitHub (see note about git-lfs below):
+```shell
+GIT_LFS_SKIP_SMUDGE=1 pip install git+https://github.com/echemdata/galvani
+```
+
+## Development installation and contributing
+
+> [!WARNING]
+>
+> This project uses Git Large File Storage (LFS) to store its test files,
+> however the LFS quota provided by GitHub is frequently exceeded.
+> This means that anyone cloning the repository with LFS installed will get
+> failures unless they set the `GIT_LFS_SKIP_SMUDGE=1` environment variable when
+> cloning.
+> The full test data from the last release can always be obtained by
+> downloading the GitHub release archives (tar or zip), at
+> https://github.com/echemdata/galvani/releases/latest
+>
+> If you wish to add test files, please ensure they are as small as possible,
+> and take care that your tests work locally without the need for the LFS files.
+> Ideally, you could commit them to your fork when making a PR, and then they
+> can be converted to LFS files as part of the review.
+
+If you wish to contribute to galvani, please clone the repository and install the testing dependencies:
+```shell
+git clone git@github.com:echemdata/galvani
+cd galvani
+pip install -e .\[tests\]
+```
+Code can be contributed back via [GitHub pull requests](https://github.com/echemdata/galvani/pulls) and new features or bugs can be discussed in the [issue tracker](https://github.com/echemdata/galvani/issues).

galvani/BioLogic.py

@@ -48,8 +48,15 @@ def fieldname_to_dtype(fieldname):
         "|Z|/Ohm",
         "Re(Z)/Ohm",
         "-Im(Z)/Ohm",
+        "Re(M)",
+        "Im(M)",
+        "|M|",
+        "Re(Permittivity)",
+        "Im(Permittivity)",
+        "|Permittivity|",
+        "Tan(Delta)",
     ):
-        return (fieldname, np.float_)
+        return (fieldname, np.float64)
     elif fieldname in (
         "Q charge/discharge/mA.h",
         "step time/s",
@@ -59,15 +66,15 @@ def fieldname_to_dtype(fieldname):
         "Efficiency/%",
         "Capacity/mA.h",
     ):
-        return (fieldname, np.float_)
-    elif fieldname in ("cycle number", "I Range", "Ns", "half cycle"):
+        return (fieldname, np.float64)
+    elif fieldname in ("cycle number", "I Range", "Ns", "half cycle", "z cycle"):
         return (fieldname, np.int_)
     elif fieldname in ("dq/mA.h", "dQ/mA.h"):
-        return ("dQ/mA.h", np.float_)
+        return ("dQ/mA.h", np.float64)
     elif fieldname in ("I/mA", "<I>/mA"):
-        return ("I/mA", np.float_)
-    elif fieldname in ("Ewe/V", "<Ewe>/V", "Ecell/V"):
-        return ("Ewe/V", np.float_)
+        return ("I/mA", np.float64)
+    elif fieldname in ("Ewe/V", "<Ewe>/V", "Ecell/V", "<Ewe/V>"):
+        return ("Ewe/V", np.float64)
     elif fieldname.endswith(
         (
             "/s",
@@ -86,11 +93,17 @@ def fieldname_to_dtype(fieldname):
             "/F",
             "/mF",
             "/uF",
+            "/µF",
+            "/nF",
             "/C",
             "/Ohm",
+            "/Ohm-1",
+            "/Ohm.cm",
+            "/mS/cm",
+            "/%",
         )
     ):
-        return (fieldname, np.float_)
+        return (fieldname, np.float64)
     else:
         raise ValueError("Invalid column header: %s" % fieldname)
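
For context on the `np.float_` → `np.float64` changes above: NumPy 2.0 removed the deprecated scalar aliases, while `np.int_` still exists and is therefore left untouched. A minimal sketch of the replacement (the field names here are just examples):

```python
import numpy as np

# np.float_ was removed in NumPy 2.0; np.float64 is the drop-in replacement
# and produces the same structured dtype as before.
dtype = np.dtype([("Ewe/V", np.float64), ("cycle number", np.int_)])
print(dtype)  # [('Ewe/V', '<f8'), ('cycle number', '<i8')] on 64-bit Linux
```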
@@ -230,7 +243,7 @@ def MPTfileCSV(file_or_path):
     return mpt_csv, comments

-VMPmodule_hdr = np.dtype(
+VMPmodule_hdr_v1 = np.dtype(
     [
         ("shortname", "S10"),
         ("longname", "S25"),
@@ -240,17 +253,30 @@ VMPmodule_hdr = np.dtype(
     ]
 )

+VMPmodule_hdr_v2 = np.dtype(
+    [
+        ("shortname", "S10"),
+        ("longname", "S25"),
+        ("max length", "<u4"),
+        ("length", "<u4"),
+        ("version", "<u4"),
+        ("unknown2", "<u4"),  # 10 for set, log and loop, 11 for data
+        ("date", "S8"),
+    ]
+)
+
 # Maps from colID to a tuple defining a numpy dtype
 VMPdata_colID_dtype_map = {
     4: ("time/s", "<f8"),
     5: ("control/V/mA", "<f4"),
     6: ("Ewe/V", "<f4"),
-    7: ("dQ/mA.h", "<f8"),
+    7: ("dq/mA.h", "<f8"),
     8: ("I/mA", "<f4"),  # 8 is either I or <I> ??
     9: ("Ece/V", "<f4"),
-    11: ("I/mA", "<f8"),
+    11: ("<I>/mA", "<f8"),
     13: ("(Q-Qo)/mA.h", "<f8"),
     16: ("Analog IN 1/V", "<f4"),
+    17: ("Analog IN 2/V", "<f4"),  # Probably column 18 is Analog IN 3/V, if anyone hits this error in the future  # noqa: E501
     19: ("control/V", "<f4"),
     20: ("control/mA", "<f4"),
     23: ("dQ/mA.h", "<f8"),  # Same as 7?
@@ -267,7 +293,7 @@ VMPdata_colID_dtype_map = {
     39: ("I Range", "<u2"),
     69: ("R/Ohm", "<f4"),
     70: ("P/W", "<f4"),
-    74: ("Energy/W.h", "<f8"),
+    74: ("|Energy|/W.h", "<f8"),
     75: ("Analog OUT/V", "<f4"),
     76: ("<I>/mA", "<f4"),
     77: ("<Ewe>/V", "<f4"),
@@ -287,8 +313,30 @@ VMPdata_colID_dtype_map = {
     169: ("Cs/µF", "<f4"),
     172: ("Cp/µF", "<f4"),
     173: ("Cp-2/µF-2", "<f4"),
-    174: ("Ewe/V", "<f4"),
-    241: ("|E1|/V", "<f4"),
+    174: ("<Ewe>/V", "<f4"),
+    178: ("(Q-Qo)/C", "<f4"),
+    179: ("dQ/C", "<f4"),
+    211: ("Q charge/discharge/mA.h", "<f8"),
+    212: ("half cycle", "<u4"),
+    213: ("z cycle", "<u4"),
+    217: ("THD Ewe/%", "<f4"),
+    218: ("THD I/%", "<f4"),
+    220: ("NSD Ewe/%", "<f4"),
+    221: ("NSD I/%", "<f4"),
+    223: ("NSR Ewe/%", "<f4"),
+    224: ("NSR I/%", "<f4"),
+    230: ("|Ewe h2|/V", "<f4"),
+    231: ("|Ewe h3|/V", "<f4"),
+    232: ("|Ewe h4|/V", "<f4"),
+    233: ("|Ewe h5|/V", "<f4"),
+    234: ("|Ewe h6|/V", "<f4"),
+    235: ("|Ewe h7|/V", "<f4"),
+    236: ("|I h2|/A", "<f4"),
+    237: ("|I h3|/A", "<f4"),
+    238: ("|I h4|/A", "<f4"),
+    239: ("|I h5|/A", "<f4"),
+    240: ("|I h6|/A", "<f4"),
+    241: ("|I h7|/A", "<f4"),
     242: ("|E2|/V", "<f4"),
     271: ("Phase(Z1) / deg", "<f4"),
     272: ("Phase(Z2) / deg", "<f4"),
@@ -441,11 +489,18 @@ def read_VMP_modules(fileobj, read_module_data=True):
             raise ValueError(
                 "Found %r, expecting start of new VMP MODULE" % module_magic
             )

+        VMPmodule_hdr = VMPmodule_hdr_v1
+        # Reading headers binary information
         hdr_bytes = fileobj.read(VMPmodule_hdr.itemsize)
         if len(hdr_bytes) < VMPmodule_hdr.itemsize:
             raise IOError("Unexpected end of file while reading module header")

+        # Checking if EC-Lab version is >= 11.50
+        if hdr_bytes[35:39] == b"\xff\xff\xff\xff":
+            VMPmodule_hdr = VMPmodule_hdr_v2
+            hdr_bytes += fileobj.read(VMPmodule_hdr_v2.itemsize - VMPmodule_hdr_v1.itemsize)
+
         hdr = np.frombuffer(hdr_bytes, dtype=VMPmodule_hdr, count=1)
         hdr_dict = dict(((n, hdr[n][0]) for n in VMPmodule_hdr.names))
         hdr_dict["offset"] = fileobj.tell()
@@ -457,7 +512,11 @@ def read_VMP_modules(fileobj, read_module_data=True):
 current module: %s
 length read: %d
 length expected: %d"""
-                    % (hdr_dict["longname"], len(hdr_dict["data"]), hdr_dict["length"])
+                    % (
+                        hdr_dict["longname"],
+                        len(hdr_dict["data"]),
+                        hdr_dict["length"],
+                    )
                 )
             yield hdr_dict
         else:
@@ -495,6 +554,7 @@ class MPRfile:
             raise ValueError("Invalid magic for .mpr file: %s" % magic)
         modules = list(read_VMP_modules(mpr_file))
         self.modules = modules
+
         (settings_mod,) = (m for m in modules if m["shortname"] == b"VMP Set   ")
         (data_module,) = (m for m in modules if m["shortname"] == b"VMP data  ")
@@ -505,15 +565,22 @@ class MPRfile:
         n_columns = np.frombuffer(data_module["data"][4:5], dtype="u1").item()

         if data_module["version"] == 0:
-            column_types = np.frombuffer(
-                data_module["data"][5:], dtype="u1", count=n_columns
-            )
-            remaining_headers = data_module["data"][5 + n_columns:100]
-            main_data = data_module["data"][100:]
+            # If EC-Lab version >= 11.50, column_types is [0 1 0 3 0 174...] instead of [1 3 174...]
+            if np.frombuffer(data_module["data"][5:6], dtype="u1").item():
+                column_types = np.frombuffer(data_module["data"][5:], dtype="u1", count=n_columns)
+                remaining_headers = data_module["data"][5 + n_columns:100]
+                main_data = data_module["data"][100:]
+            else:
+                column_types = np.frombuffer(data_module["data"][5:], dtype="u1", count=n_columns * 2)
+                # suppressing zeros in column types array
+                column_types = column_types[1::2]
+                # remaining headers should be empty except for bytes 5 + n_columns * 2
+                # and 1006 which are sometimes == 1
+                remaining_headers = data_module["data"][6 + n_columns * 2:1006]
+                main_data = data_module["data"][1007:]
         elif data_module["version"] in [2, 3]:
-            column_types = np.frombuffer(
-                data_module["data"][5:], dtype="<u2", count=n_columns
-            )
+            column_types = np.frombuffer(data_module["data"][5:], dtype="<u2", count=n_columns)
             # There are bytes of data before the main array starts
             if data_module["version"] == 3:
                 num_bytes_before = 406  # version 3 added `\x01` to the start
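
To illustrate the new `version == 0` branch: in files written by EC-Lab ≥ 11.50, each column ID occupies two bytes with a leading zero byte, which is why the real IDs are recovered with a `[1::2]` stride. A small sketch using the IDs from the comment above:

```python
import numpy as np

# Column-ID bytes as laid out by EC-Lab >= 11.50 (zero byte before each ID):
raw = np.frombuffer(bytes([0, 1, 0, 3, 0, 174]), dtype="u1")
print(raw[1::2])  # [  1   3 174] -- the actual column IDs
```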
@@ -542,7 +609,7 @@ class MPRfile:
         if maybe_loop_module:
             (loop_module,) = maybe_loop_module
             if loop_module["version"] == 0:
-                self.loop_index = np.fromstring(loop_module["data"][4:], dtype="<u4")
+                self.loop_index = np.frombuffer(loop_module["data"][4:], dtype="<u4")
                 self.loop_index = np.trim_zeros(self.loop_index, "b")
             else:
                 raise ValueError(

galvani/res2sqlite.py

@@ -439,7 +439,8 @@ CREATE VIEW IF NOT EXISTS Capacity_View
 def mdb_get_data_text(s3db, filename, table):
     print("Reading %s..." % table)
     insert_pattern = re.compile(
-        r'INSERT INTO "\w+" \([^)]+?\) VALUES \(("[^"]*"|[^")])+?\);\n', re.IGNORECASE
+        r"""INSERT INTO "\w+" \([^)]+?\) VALUES (\((('[^']*')|"[^"]*"|[^')])+?\),?\s*)+;\n""",
+        re.IGNORECASE,
     )
     try:
         # Initialize values to avoid NameError in except clause
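
A quick check of the updated pattern against the batched mdbtools ≥ 1.0 output (the row values below are made up for illustration; `Channel_Normal_Table` is one of the real Arbin tables):

```python
import re

insert_pattern = re.compile(
    r"""INSERT INTO "\w+" \([^)]+?\) VALUES (\((('[^']*')|"[^"]*"|[^')])+?\),?\s*)+;\n""",
    re.IGNORECASE,
)

# mdbtools >= 1.0 emits several rows per INSERT and quotes text with single
# quotes; the old pattern expected exactly one parenthesised row per statement.
sql = """INSERT INTO "Channel_Normal_Table" (Test_ID, Step_Index, Code) VALUES (1, 0, 'a'), (1, 1, 'b');\n"""
assert insert_pattern.match(sql)
```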
@@ -570,13 +571,25 @@ def mdb_get_version(filename):
     return version_tuple

-def convert_arbin_to_sqlite(input_file, output_file):
+def convert_arbin_to_sqlite(input_file, output_file=None):
     """Read data from an Arbin .res data file and write to a sqlite file.

-    Any data currently in the sqlite file will be erased!
+    Any data currently in an sqlite file at `output_file` will be erased!
+
+    Parameters:
+        input_file (str): The path to the Arbin .res file to read from.
+        output_file (str or None): The path to the sqlite file to write to; if None,
+            return a `sqlite3.Connection` into an in-memory database.
+
+    Returns:
+        None or sqlite3.Connection
+
     """
     arbin_version = mdb_get_version(input_file)
+    if output_file is None:
+        output_file = ":memory:"
     s3db = sqlite3.connect(output_file)
     tables_to_convert = copy(mdb_tables)
@@ -601,6 +614,11 @@ def convert_arbin_to_sqlite(input_file, output_file):
     print("Vacuuming database...")
     s3db.executescript("VACUUM; ANALYZE;")

+    if output_file == ":memory:":
+        return s3db
+
+    s3db.close()
+

 def main(argv=None):
     parser = argparse.ArgumentParser(
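
A usage sketch of the new in-memory mode (assumes MDBTools is installed; `"my_data.res"` is a placeholder path):

```python
from galvani.res2sqlite import convert_arbin_to_sqlite

# With output_file=None the data lands in an in-memory database and the open
# sqlite3.Connection is returned, instead of being written to disk and closed.
db = convert_arbin_to_sqlite("my_data.res")
try:
    print(db.execute("SELECT COUNT(*) FROM Channel_Normal_Table;").fetchone())
finally:
    db.close()
```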

setup.py

@@ -12,7 +12,7 @@ with open(os.path.join(os.path.dirname(__file__), "README.md")) as f:
 setup(
     name="galvani",
-    version="0.3.0",
+    version="0.4.1",
     description="Open and process battery charger log data files",
     long_description=readme,
     long_description_content_type="text/markdown",

tests/test_Arbin.py

@@ -53,6 +53,16 @@ def test_convert_Arbin_to_sqlite_function(testdata_dir, tmpdir, basename):
         csr.fetchone()

+@pytest.mark.parametrize("basename", ["arbin1", "UM34_Test005E"])
+def test_convert_Arbin_to_sqlite_function_in_memory(testdata_dir, tmpdir, basename):
+    """Convert an Arbin file to an in-memory SQLite database."""
+    res_file = os.path.join(testdata_dir, basename + ".res")
+    conn = None
+    with res2sqlite.convert_arbin_to_sqlite(res_file) as conn:
+        csr = conn.execute("SELECT * FROM Channel_Normal_Table;")
+        csr.fetchone()
+
 @pytest.mark.skipif(
     not have_mdbtools, reason="Reading the Arbin file requires MDBTools"
 )

tests/test_BioLogic.py

@@ -9,7 +9,7 @@ import re
 from datetime import date, datetime

 import numpy as np
-from numpy.testing import assert_array_almost_equal, assert_array_equal
+from numpy.testing import assert_array_almost_equal, assert_array_equal, assert_allclose
 import pytest

 from galvani import BioLogic, MPTfile, MPRfile
@@ -210,6 +210,95 @@ def assert_MPR_matches_MPT(mpr, mpt, comments):
         pass

+def assert_MPR_matches_MPT_v2(mpr, mpt, comments):
+    """
+    Asserts that the fields in the MPR.data are the same as in the MPT.
+
+    Modified from assert_MPR_matches_MPT. Automatically converts dtype from MPT data
+    to dtype from MPR data before comparing the columns.
+    Special case for EIS_indicators: these fields are valid only at f<100kHz so their
+    values are replaced by -1 or 0 at high frequency in the MPT file, this is not the
+    case in the MPR data.
+
+    Parameters
+    ----------
+    mpr : MPRfile
+        Data extracted with the MPRfile class.
+    mpt : np.array
+        Data extracted with MPTfile method.
+
+    Returns
+    -------
+    None.
+    """
+    def assert_field_matches(fieldname):
+        EIS_quality_indicators = [
+            "THD Ewe/%",
+            "NSD Ewe/%",
+            "NSR Ewe/%",
+            "|Ewe h2|/V",
+            "|Ewe h3|/V",
+            "|Ewe h4|/V",
+            "|Ewe h5|/V",
+            "|Ewe h6|/V",
+            "|Ewe h7|/V",
+            "THD I/%",
+            "NSD I/%",
+            "NSR I/%",
+            "|I h2|/A",
+            "|I h3|/A",
+            "|I h4|/A",
+            "|I h5|/A",
+            "|I h6|/A",
+            "|I h7|/A",
+        ]
+        if fieldname in EIS_quality_indicators:  # EIS quality indicators only valid for f < 100kHz
+            index_inf_100k = np.where(mpr.data["freq/Hz"] < 100000)[0]
+            assert_allclose(
+                mpr.data[index_inf_100k][fieldname],
+                mpt[index_inf_100k][fieldname].astype(mpr.data[fieldname].dtype),
+            )
+        elif fieldname == "<Ewe>/V":
+            assert_allclose(
+                mpr.data[fieldname],
+                mpt["Ewe/V"].astype(mpr.data[fieldname].dtype),
+            )
+        elif fieldname == "<I>/mA":
+            assert_allclose(
+                mpr.data[fieldname],
+                mpt["I/mA"].astype(mpr.data[fieldname].dtype),
+            )
+        elif fieldname == "dq/mA.h":
+            assert_allclose(
+                mpr.data[fieldname],
+                mpt["dQ/mA.h"].astype(mpr.data[fieldname].dtype),
+            )
+        else:
+            assert_allclose(
+                mpr.data[fieldname],
+                mpt[fieldname].astype(mpr.data[fieldname].dtype),
+            )
+
+    def assert_field_exact(fieldname):
+        if fieldname in mpr.dtype.fields:
+            assert_array_equal(mpr.data[fieldname], mpt[fieldname])
+
+    for key in mpr.flags_dict.keys():
+        assert_array_equal(mpr.get_flag(key), mpt[key])
+
+    for d in mpr.dtype.descr[1:]:
+        assert_field_matches(d[0])
+
+    try:
+        assert timestamp_from_comments(comments) == mpr.timestamp.replace(microsecond=0)
+    except AttributeError:
+        pass
+
 @pytest.mark.parametrize(
     "basename",
     [
@@ -252,3 +341,20 @@ def test_MPR6_matches_MPT6(testdata_dir):
     mpt, comments = MPTfile(os.path.join(testdata_dir, "bio_logic6.mpt"))
     mpr.data = mpr.data[:958]  # .mpt file is incomplete
     assert_MPR_matches_MPT(mpr, mpt, comments)
+
+@pytest.mark.parametrize(
+    "basename_v1150",
+    ["v1150_CA", "v1150_CP", "v1150_GCPL", "v1150_GEIS", "v1150_MB", "v1150_OCV", "v1150_PEIS"],
+)
+def test_MPR_matches_MPT_v1150(testdata_dir, basename_v1150):
+    """Check the MPR parser against the MPT parser.
+
+    Load a binary .mpr file and a text .mpt file which should contain
+    exactly the same data. Check that the loaded data actually match.
+    """
+    binpath = os.path.join(testdata_dir, "v1150", basename_v1150 + ".mpr")
+    txtpath = os.path.join(testdata_dir, "v1150", basename_v1150 + ".mpt")
+    mpr = MPRfile(binpath)
+    mpt, comments = MPTfile(txtpath, encoding="latin1")
+    assert_MPR_matches_MPT_v2(mpr, mpt, comments)

New binary test files (Git LFS, content not shown):

tests/testdata/v1150/v1150_CA.mpr
tests/testdata/v1150/v1150_CA.mpt
tests/testdata/v1150/v1150_CP.mpr
tests/testdata/v1150/v1150_CP.mpt
tests/testdata/v1150/v1150_GCPL.mpr
tests/testdata/v1150/v1150_GCPL.mpt
tests/testdata/v1150/v1150_GEIS.mpr
tests/testdata/v1150/v1150_GEIS.mpt
tests/testdata/v1150/v1150_MB.mpr
tests/testdata/v1150/v1150_MB.mpt
tests/testdata/v1150/v1150_OCV.mpr
tests/testdata/v1150/v1150_OCV.mpt
tests/testdata/v1150/v1150_PEIS.mpr
tests/testdata/v1150/v1150_PEIS.mpt