93 Commits

Author SHA1 Message Date
d4a5f444b1 Merge branch 'JhonFlash3008-loop-from-file'
Closes https://codeberg.org/echemdata/galvani/pulls/124

See also:
8a9d475222
2025-07-30 15:26:14 +03:00
d81bf829bb Skip tests where the data file is missing 2025-07-30 15:24:22 +03:00
d77aa1555b Refactor tests 2025-07-30 15:24:22 +03:00
Jonathan Schillings
0d684af470 Add loop_from_file and timestamp_from_file functions
to extract loop_index and timestamp from the temporary _LOOP.txt and .mpl files during MPRfile initialization
Added unit tests, but the test files could not be uploaded because the Git LFS quota was exceeded

Edited by Chris Kerr to fix flake8 warnings and resolve my comments from
the original PR https://github.com/echemdata/galvani/pull/102
2025-07-30 15:24:22 +03:00
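A minimal usage sketch of the new helpers (paths are illustrative; the function names match the BioLogic.py diff further down):

```python
from galvani.BioLogic import loop_from_file, timestamp_from_file

# While an experiment is still running, EC-Lab keeps loop indexes in
# <name>_LOOP.txt and log data in <name>.mpl; MPRfile falls back to these
# files when the .mpr modules are not yet complete.
loop_index = loop_from_file("experiment_LOOP.txt")
started = timestamp_from_file("experiment.mpl")
print(loop_index, started)
```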
baec8934b8 Merge remote-tracking branch 'cb-ml-evs/ml-evs/col-182-and-warning-only-mode'
Closes PR https://codeberg.org/echemdata/galvani/pulls/123
See also GitHub PR https://github.com/echemdata/galvani/pull/124
2025-07-30 14:36:59 +03:00
Matthew Evans
ccaa66b206 Convert to np.dtype in test 2025-06-13 18:24:42 +01:00
Matthew Evans
a59f263c2b Revert to defaulting to raising an error on unknown cols 2025-06-13 18:23:24 +01:00
Matthew Evans
30d6098aa0 Linting 2025-06-13 18:16:00 +01:00
Matthew Evans
2c90a2b038 Temporarily enable the new feature by default 2025-06-13 16:45:58 +01:00
Matthew Evans
5a207dbf5e Add guard for combinatorially exploring more than 3 unknown column data types 2025-06-13 16:29:21 +01:00
Matthew Evans
7964dc85db Add mode to attempt to read files with unknown columns and only warn 2025-06-13 16:22:30 +01:00
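A minimal usage sketch of the warn-only mode, assuming the `error_on_unknown_column` keyword shown in the BioLogic.py diff below and an illustrative file path:

```python
from galvani import BioLogic

# With error_on_unknown_column=False, unknown column IDs become
# 'unknown_colID_<n>' fields and a few common dtypes are tried instead of
# raising NotImplementedError.
mpr = BioLogic.MPRfile("test.mpr", error_on_unknown_column=False)
print(mpr.dtype)
```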
Matthew Evans
569a5f2a9c Add step time/s field for column 182 2025-06-13 15:48:19 +01:00
b6143e4b05 Merge branch 'move-to-codeberg' 2025-03-23 14:59:50 +02:00
4efec58374 Remove warning about Git LFS bandwidth limits 2025-03-23 14:51:52 +02:00
627387f9c4 Update URLs to point to CodeBerg repo 2025-03-23 14:51:52 +02:00
12b4badc31 Merge remote-tracking branch 'github/master' 2025-03-23 14:40:23 +02:00
5ed03ed20c Bump version to 0.5.0 2025-03-23 08:55:44 +02:00
Matthew Evans
c8e5bb12b8 Merge pull request #122 from echemdata/ml-evs/fix-ci
Pin and update release downloader action
2025-03-22 15:56:57 +00:00
Matthew Evans
1d913dd2f1 Pin and update release downloader action 2025-03-22 13:47:42 +00:00
Matthew Evans
3c1446ff07 Merge pull request #119 from d-cogswell/master
Fix deprecated numpy aliases which were removed in 2.0.0
2024-07-31 15:54:27 +01:00
Dan Cogswell
e18a21ffbc Reverts 79e3df0, which pinned the numpy version. 2024-07-31 10:18:47 -04:00
Dan Cogswell
260ad72a6e Fix deprecated numpy aliases which were removed in numpy version 2.0.0. 2024-07-30 10:55:48 -04:00
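The substitution, visible throughout the BioLogic.py diff below, replaces the alias removed in numpy 2.0 with its concrete type; a minimal check:

```python
import numpy as np

# np.float_ was an alias for the 8-byte float and was removed in numpy 2.0;
# np.float64 is the drop-in replacement used by this commit.
assert np.dtype("<f8").type is np.float64
```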
Matthew Evans
7d264999db Merge pull request #118 from echemdata/ml-evs/lfs
LFS workaround using archived releases in CI
2024-07-12 15:29:02 +01:00
Matthew Evans
1e53de56ef LFS note formatting and location in README 2024-07-12 14:33:53 +01:00
Matthew Evans
f44851ec37 Add flake8 skip 2024-07-12 14:31:15 +01:00
Matthew Evans
3b5dc48fc6 Add LFS warning note 2024-07-12 14:31:14 +01:00
Matthew Evans
56bebfe498 Replace failing lfs caching with downloading test files from release tarballs 2024-07-12 14:31:11 +01:00
Matthew Evans
d33c6f7561 Merge pull request #117 from echemdata/ml-evs/pin-numpy
Add upper numpy pin
2024-07-12 13:20:03 +01:00
Matthew Evans
79e3df0ed9 Add upper numpy pin 2024-07-12 12:45:02 +01:00
3c904db04e Merge pull request #105 from echemdata/ml-evs/arbin-in-memory
Optionally read Arbin into in-memory sqlite without temporary file
2024-03-03 10:32:30 +02:00
Matthew Evans
fbc90fc961 Update tests/test_Arbin.py
Co-authored-by: Chris Kerr <chris.kerr@mykolab.ch>
2024-03-02 18:13:40 +01:00
545a82ec35 Bump version to 0.4.1
I forgot to update the version before tagging 0.4.0 so I will have to
tag a 0.4.1 release instead.
2024-03-02 16:29:59 +02:00
7c37ea306b Merge pull request #107 from echemdata/ml-evs/analog-in-fix
Add `Analog IN <n>/V` columns to map
2024-03-02 16:20:19 +02:00
cd3eaae2c1 Merge pull request #103 from echemdata/ml-evs/preparing-release
Refresh README in preparation for release
2024-03-02 15:46:55 +02:00
Matthew Evans
a9be96b5c2 Fix column name and add explanation 2024-02-29 09:40:54 +00:00
Matthew Evans
0c2ecd42ca Duplicate 'ANALOG IN 1/V' to allow reading 2024-02-26 11:44:26 +00:00
Matthew Evans
a845731131 Optionally read Arbin into in-memory sqlite without temporary file 2024-02-12 10:55:52 +00:00
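A hypothetical sketch of the general technique (not necessarily galvani's exact API, which this commit message does not spell out): sqlite3 treats the special path ":memory:" as an in-memory database, so no temporary file is written:

```python
import sqlite3

# ":memory:" keeps the whole database in RAM instead of a temporary file.
# Table and column names below are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Channel_Normal_Table (Test_ID INTEGER, Voltage REAL)")
db.execute("INSERT INTO Channel_Normal_Table VALUES (1, 3.7)")
print(db.execute("SELECT * FROM Channel_Normal_Table").fetchall())
```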
Matthew Evans
6d2a5b31fb Refresh the README with installation instructions and an arbin snippet 2024-02-12 10:39:09 +00:00
1fd9f8454a Merge pull request #97 from chatcannon/JhonFlash-master
Add support for EC-Lab v11.50

Rebased from #95 by @JhonFlash3008
2024-02-06 21:43:54 +02:00
f0177f2470 Merge pull request #101 from echemdata/ml-evs/attempt-to-cache-lfs
Attempt to cache LFS in GH actions
2024-02-06 21:42:10 +02:00
Matthew Evans
ea50999349 Bump setup-python to v5 2024-02-03 21:23:31 +01:00
Matthew Evans
88d1fc3a71 Attempt to cache LFS in GH actions 2024-02-03 21:15:10 +01:00
4971f2b550 Apply review comments 2024-02-03 14:24:03 +02:00
5cdc620f16 Fix flake8 lint 2024-02-03 14:00:16 +02:00
Jonathan Schillings
7a6ac1c542 Added tests for v11.50 2024-02-03 13:53:23 +02:00
46f296f61f Merge branch 'master' into JhonFlash-master 2024-02-03 13:51:43 +02:00
aa0aee6128 Merge pull request #99 from chatcannon/mdbtools-1-0
Update regular expression for mdbtools 1.0 output
2024-02-03 13:47:06 +02:00
dbd01957db Use newer Ubuntu image for CI tests
We no longer need to use an old Ubuntu image with an old mdbtools version.
2024-01-20 23:41:43 +02:00
13957160f8 Update regular expression for mdbtools 1.0 output
The output formatting has changed - it now puts multiple data rows in a
single INSERT statement, and also changes the quoting of text data.
2024-01-20 23:39:41 +02:00
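An illustrative (not the project's actual) pattern for the new multi-row format might look like this:

```python
import re

# mdbtools 1.0 emits statements like
#   INSERT INTO "tbl" ("a","b") VALUES (1,'x'),(2,'y');
# i.e. several rows per INSERT and different quoting of text data.
insert_re = re.compile(r'INSERT INTO "(?P<table>\w+)" \([^)]*\) VALUES (?P<rows>.*);')
row_re = re.compile(r"\(([^()]*)\)")

line = "INSERT INTO \"tbl\" (\"a\",\"b\") VALUES (1,'x'),(2,'y');"
m = insert_re.match(line)
if m:
    print(m.group("table"), row_re.findall(m.group("rows")))  # tbl ["1,'x'", "2,'y'"]
```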
0267b8b59f Bump version to 0.3.0 2024-01-20 22:57:45 +02:00
5448af7e77 Merge pull request #98 from chatcannon/col-27-ewe-ece
add support for column 27: E_we-E_ce/V
2024-01-20 22:35:12 +02:00
Ilka Schulz
9a61eb35d1 add test for column 27 (E_we - E_ce) 2024-01-20 22:34:13 +02:00
0b5b5b8ea5 Merge branch 'master' into col-27-ewe-ece 2024-01-20 22:31:12 +02:00
jschilli
77d56290d4 Added support for v11.50:
A few modifications in VMPdata_dtype_from_colIDs
Added the new VMPmodule_hdr_v2 header
Modified MPRfile initialization

Includes squashed linting fixes by @ml-evs
2024-01-20 22:24:09 +02:00
daf85d59cf Merge pull request #96 from chatcannon/black
Format with black
2024-01-20 22:20:41 +02:00
Matthew Evans
6427ef4ded Reapply linting fixes 2024-01-20 20:15:31 +00:00
16961b8169 Reformatted remaining files with black 23.12.1 2024-01-20 21:57:49 +02:00
1cd5bd6239 Reformatted test scripts with black 23.12.1 2024-01-20 21:57:31 +02:00
239db97c69 Reformatted res2sqlite.py with black 23.12.1 2024-01-20 21:57:14 +02:00
dee8af3a86 Reformatted BioLogic.py with black 23.12.1 2024-01-20 21:45:28 +02:00
31416533d8 Merge pull request #88 from Paulemeister/master
Add ID 505 and 509 from EC-Lab
2023-10-26 09:46:23 +03:00
b580ee2d9f Merge pull request #90 from ml-evs/ml-evs/add_gh_actions
Add tox-gh based CI
2023-10-26 09:45:47 +03:00
Matthew Evans
28e532c860 Pull lfs in CI 2023-08-18 10:26:00 +01:00
Matthew Evans
a31a07adb2 Add copyright info to CI config 2023-08-18 10:22:20 +01:00
Matthew Evans
575e3a5bba Linting fix 2023-08-18 10:21:03 +01:00
Matthew Evans
aa48c6d60f Remove 3.6 and 3.7 support in CI 2023-08-18 10:21:02 +01:00
Matthew Evans
0f0c281fa2 Add tox-gh based CI 2023-08-18 10:20:58 +01:00
Paul Budden
8ce4eb0ccf Added IDs 505 and 509 from EC-Lab, according to the Export to Text dialog, assuming they are ordered by ID 2023-07-21 12:22:40 +02:00
4bca2ac89c Merge pull request #86 from whs92/master
Fixed syntax error typo
2022-12-31 10:10:12 +02:00
will Smith
ab65d28f38 Fixed colon error 2022-12-30 18:25:15 +01:00
9f51925612 Merge pull request #75 from chatcannon/yuyu-step-time
Add "step time/s" column data type
2022-11-30 18:52:35 +02:00
1025923aac Merge branch 'master' into yuyu-step-time 2022-11-30 18:51:25 +02:00
e5a1b847b4 Merge pull request #71 from GhostDeini/patch-1
Add more column types to BioLogic.py
2022-11-30 18:44:11 +02:00
Ilka Schulz
fec3a22548 add support for column 27: E_we-E_ce/V (fix #74) 2022-11-17 09:13:01 +01:00
e1ff99a559 Update test precision for the new data files 2022-09-10 22:33:55 +03:00
0ffdd2665e Improve MPT parsing for the new test data file 2022-09-10 22:33:29 +03:00
54e5765264 Add test data provided by yuyuchen0821 2022-09-10 22:05:46 +03:00
陳致諭(Chihyu Chen#5570)
2e7437c7ca Add Column 438 'Unknown' to parser 2022-09-10 17:38:06 +03:00
GhostDeini
32ea152ccf Update BioLogic.py
Added "control/mA", "Q charge/discharge/mA.h", "step time/s", "Q charge/mA.h", "Q discharge/mA.h", "Efficiency/%", "Capacity/mA.h" to possible fieldnames in fieldname_to_dtype(fieldname). Also in VMPdata_colID_dtype_map.
2022-05-30 16:31:24 +02:00
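A quick check of the new mappings (assuming the module-level function as defined in the BioLogic.py diff below):

```python
import numpy as np
from galvani.BioLogic import fieldname_to_dtype

# The names added by this commit now resolve to float64 columns.
assert fieldname_to_dtype("Q charge/mA.h") == ("Q charge/mA.h", np.float64)
assert fieldname_to_dtype("step time/s") == ("step time/s", np.float64)
```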
d6d2125d69 Merge pull request #68 from chatcannon/add-ewe-column
Add Column 174 'Ewe/V'
2022-01-18 18:49:29 +02:00
c1e5d92ed0 Add column 174 'Ewe/V' to parser
Suggested by @Etruria89 to fix #67
2022-01-15 08:22:06 +02:00
b63abc4516 Add a new MPR test file which contains column 174 'Ewe/V' 2022-01-15 08:20:38 +02:00
c02a871c35 Merge pull request #58 from chatcannon/testdata-lfs
Store the test data with git-lfs
2022-01-15 08:14:07 +02:00
ad39747e5c Add .license files for the testdata files sent in by other people 2021-08-31 18:43:16 +03:00
f1fbcbec44 Set REUSE metadata for testdata files with dep5 file 2021-08-31 18:33:52 +03:00
4aea136d50 Add SPDX metadata to .gitattributes 2021-08-31 18:25:12 +03:00
b9a8afa989 Add test data file for the Rapp/Ohm column ID 2021-08-31 18:21:35 +03:00
a3c742e53f Merge branch 'master' into testdata-lfs 2021-08-31 18:21:01 +03:00
dcd4315421 Merge branch 'master' into testdata-lfs 2021-04-25 20:30:16 +03:00
8d317435f6 Remove get_testdata.sh
This file is no longer needed, because the test data are saved
in the repo with git-lfs.
2020-11-07 17:54:08 +02:00
093cde0b62 Add all test data files to the repo
Store the files with git-lfs to avoid making the git history
excessively large.
2020-11-07 17:53:00 +02:00
8d0e2a4400 Store data files with git-lfs 2020-11-07 17:52:41 +02:00
a60caa41c5 Do not ignore testdata files 2020-11-07 17:51:27 +02:00
59 changed files with 1415 additions and 462 deletions

8
.gitattributes vendored Normal file

@@ -0,0 +1,8 @@
# SPDX-FileCopyrightText: 2021 Christopher Kerr <chris.kerr@mykolab.ch>
# SPDX-License-Identifier: CC0-1.0
# Arbin data files
*.res filter=lfs diff=lfs merge=lfs -text
# Bio-Logic data files
*.mpr filter=lfs diff=lfs merge=lfs -text
*.mpt filter=lfs diff=lfs merge=lfs -text

71
.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,71 @@
# SPDX-FileCopyrightText: 2013-2020 Christopher Kerr, "bcolsen"
# SPDX-License-Identifier: GPL-3.0-or-later
name: CI tests
on:
pull_request:
push:
branches:
- master
concurrency:
# cancels running checks on new pushes
group: check-${{ github.ref }}
cancel-in-progress: true
jobs:
pytest:
name: Run Python unit tests
runs-on: ubuntu-22.04
strategy:
fail-fast: false
max-parallel: 6
matrix:
python-version: ['3.8', '3.9', '3.10', '3.11']
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
lfs: false
# Due to limited LFS bandwidth, it is preferable to download
# test files from the last release.
#
# This does mean that testing new LFS files in the CI is tricky;
# care should be taken to also test new files locally first.
# Tests missing these files in the CI should still fail.
- name: Download static files from last release for testing
uses: robinraju/release-downloader@v1.12
with:
latest: true
tarBall: true
out-file-path: /home/runner/work/last-release
extract: true
- name: Copy test files from static downloaded release
run: |
cp -r /home/runner/work/last-release/*/tests/testdata tests
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install MDBTools OS dependency
run: |
sudo apt install -y mdbtools
# tox-gh workflow following instructions at https://github.com/tox-dev/tox-gh
- name: Install tox
run: python -m pip install tox-gh
- name: Setup tests
run: |
tox -vv --notest
- name: Run all tests
run: |-
tox --skip-pkg-install

10
.gitignore vendored

@@ -39,5 +39,11 @@ nosetests.xml
.project
.pydevproject
# Data for testing
testdata
# Compressed files used to transfer test data
*.gz
*.bz2
*.xz
*.zip
*.tar
*.tgz
*.tbz2

.reuse/dep5

@@ -1,10 +1,8 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Galvani
Upstream-Contact: Christopher Kerr <chris.kerr@mykolab.ch>
Source: https://github.com/echemdata/galvani
Source: https://codeberg.org/echemdata/galvani
# Sample paragraph, commented out:
#
# Files: src/*
# Copyright: $YEAR $NAME <$CONTACT>
# License: ...
Files: tests/testdata/*
Copyright: 2010-2014 Christopher Kerr <chris.kerr@mykolab.ch>
License: CC-BY-4.0

.travis.yml

@@ -6,7 +6,6 @@ cache:
directories:
- .tox
- .pytest_cache
- tests/testdata
python:
- "3.6"
- "3.7"
@@ -14,5 +13,4 @@ python:
- "3.9"
install:
- pip install tox-travis
- sh get_testdata.sh
script: tox

156
LICENSES/CC-BY-4.0.txt Normal file

@@ -0,0 +1,156 @@
Creative Commons Attribution 4.0 International
Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors.
Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public.
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
Section 1 – Definitions.
a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
Section 2 – Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part; and
B. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
3. Term. The term of this Public License is specified in Section 6(a).
4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
5. Downstream recipients.
A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
B. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this Public License.
3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.
Section 3 – License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified form), You must:
A. retain the following if it is supplied by the Licensor with the Licensed Material:
i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License.
Section 4 – Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;
b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
Section 5 – Disclaimer of Warranties and Limitation of Liability.
a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.
b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.
c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
Section 6 – Term and Termination.
a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
2. upon express reinstatement by the Licensor.
c. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
d. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
Section 7 – Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
Section 8 – Interpretation.
a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
Creative Commons may be contacted at creativecommons.org.

README.md

@@ -7,21 +7,60 @@ SPDX-FileCopyrightText: 2013-2020 Christopher Kerr, Peter Attia
SPDX-License-Identifier: GPL-3.0-or-later
-->
Read proprietary file formats from electrochemical test stations
Read proprietary file formats from electrochemical test stations.
## Bio-Logic .mpr files ##
# Usage
## Bio-Logic .mpr files
Use the `MPRfile` class from BioLogic.py (exported in the main package)
````
```python
from galvani import BioLogic
import pandas as pd
mpr_file = BioLogic.MPRfile('test.mpr')
df = pd.DataFrame(mpr_file.data)
````
```
## Arbin .res files ##
## Arbin .res files
Use the res2sqlite.py script to convert the .res file to a sqlite3 database
with the same schema.
Use the `./galvani/res2sqlite.py` script to convert the .res file to a sqlite3 database with the same schema, which can then be interrogated with external tools or directly in Python.
For example, to extract the data into a pandas DataFrame (pandas must be installed separately):
```python
import sqlite3
import pandas as pd
from galvani.res2sqlite import convert_arbin_to_sqlite
convert_arbin_to_sqlite("input.res", "output.sqlite")
with sqlite3.connect("output.sqlite") as db:
df = pd.read_sql(sql="select * from Channel_Normal_Table", con=db)
```
This functionality requires [MDBTools](https://github.com/mdbtools/mdbtools) to be installed on the local system.
# Installation
The latest galvani releases can be installed from [PyPI](https://pypi.org/project/galvani/) via
```shell
pip install galvani
```
The latest development version can be installed with `pip` directly from Codeberg:
```shell
pip install git+https://codeberg.org/echemdata/galvani
```
## Development installation and contributing
If you wish to contribute to galvani, please clone the repository and install the testing dependencies:
```shell
git clone git@codeberg.org:echemdata/galvani
cd galvani
pip install -e .\[tests\]
```
Code can be contributed back via [pull requests](https://codeberg.org/echemdata/galvani/pulls) and new features or bugs can be discussed in the [issue tracker](https://codeberg.org/echemdata/galvani/issues).

galvani/BioLogic.py

@@ -5,51 +5,120 @@
#
# SPDX-License-Identifier: GPL-3.0-or-later
__all__ = ['MPTfileCSV', 'MPTfile']
__all__ = ["MPTfileCSV", "MPTfile"]
import re
import csv
from os import SEEK_SET
import os.path
import time
from datetime import date, datetime, timedelta
from collections import defaultdict, OrderedDict
import warnings
import numpy as np
UNKNOWN_COLUMN_TYPE_HIERARCHY = ("<f8", "<f4", "<u4", "<u2", "<u1")
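# Dtypes tried in order (widest float first) when warn-only mode has to read a
# column whose ID is not in the maps below; see VMPdata_dtype_from_colIDs.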
def fieldname_to_dtype(fieldname):
"""Converts a column header from the MPT file into a tuple of
canonical name and appropriate numpy dtype"""
if fieldname == 'mode':
return ('mode', np.uint8)
elif fieldname in ("ox/red", "error", "control changes", "Ns changes",
"counter inc."):
if fieldname == "mode":
return ("mode", np.uint8)
elif fieldname in (
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
):
return (fieldname, np.bool_)
elif fieldname in ("time/s", "P/W", "(Q-Qo)/mA.h", "x", "control/V",
"control/V/mA", "(Q-Qo)/C", "dQ/C", "freq/Hz",
"|Ewe|/V", "|I|/A", "Phase(Z)/deg", "|Z|/Ohm",
"Re(Z)/Ohm", "-Im(Z)/Ohm"):
return (fieldname, np.float_)
elif fieldname in ("cycle number", "I Range", "Ns", "half cycle"):
elif fieldname in (
"time/s",
"P/W",
"(Q-Qo)/mA.h",
"x",
"control/V",
"control/mA",
"control/V/mA",
"(Q-Qo)/C",
"dQ/C",
"freq/Hz",
"|Ewe|/V",
"|I|/A",
"Phase(Z)/deg",
"|Z|/Ohm",
"Re(Z)/Ohm",
"-Im(Z)/Ohm",
"Re(M)",
"Im(M)",
"|M|",
"Re(Permittivity)",
"Im(Permittivity)",
"|Permittivity|",
"Tan(Delta)",
):
return (fieldname, np.float64)
elif fieldname in (
"Q charge/discharge/mA.h",
"step time/s",
"Q charge/mA.h",
"Q discharge/mA.h",
"Temperature/°C",
"Efficiency/%",
"Capacity/mA.h",
):
return (fieldname, np.float64)
elif fieldname in ("cycle number", "I Range", "Ns", "half cycle", "z cycle"):
return (fieldname, np.int_)
elif fieldname in ("dq/mA.h", "dQ/mA.h"):
return ("dQ/mA.h", np.float_)
return ("dQ/mA.h", np.float64)
elif fieldname in ("I/mA", "<I>/mA"):
return ("I/mA", np.float_)
elif fieldname in ("Ewe/V", "<Ewe>/V"):
return ("Ewe/V", np.float_)
return ("I/mA", np.float64)
elif fieldname in ("Ewe/V", "<Ewe>/V", "Ecell/V", "<Ewe/V>"):
return ("Ewe/V", np.float64)
elif fieldname.endswith(
(
"/s",
"/Hz",
"/deg",
"/W",
"/mW",
"/W.h",
"/mW.h",
"/A",
"/mA",
"/A.h",
"/mA.h",
"/V",
"/mV",
"/F",
"/mF",
"/uF",
"/µF",
"/nF",
"/C",
"/Ohm",
"/Ohm-1",
"/Ohm.cm",
"/mS/cm",
"/%",
)
):
return (fieldname, np.float64)
else:
raise ValueError("Invalid column header: %s" % fieldname)
def comma_converter(float_text):
"""Convert text to float whether the decimal point is '.' or ','"""
trans_table = bytes.maketrans(b',', b'.')
trans_table = bytes.maketrans(b",", b".")
return float(float_text.translate(trans_table))
def MPTfile(file_or_path, encoding='ascii'):
def MPTfile(file_or_path, encoding="ascii"):
"""Opens .mpt files as numpy record arrays
Checks for the correct headings, skips any comments and returns a
@@ -57,16 +126,15 @@ def MPTfile(file_or_path, encoding='ascii'):
"""
if isinstance(file_or_path, str):
mpt_file = open(file_or_path, 'rb')
mpt_file = open(file_or_path, "rb")
else:
mpt_file = file_or_path
magic = next(mpt_file)
if magic != b'EC-Lab ASCII FILE\r\n':
if magic not in (b"EC-Lab ASCII FILE\r\n", b"BT-Lab ASCII FILE\r\n"):
raise ValueError("Bad first line for EC-Lab file: '%s'" % magic)
nb_headers_match = re.match(rb'Nb header lines : (\d+)\s*$',
next(mpt_file))
nb_headers_match = re.match(rb"Nb header lines : (\d+)\s*$", next(mpt_file))
nb_headers = int(nb_headers_match.group(1))
if nb_headers < 3:
raise ValueError("Too few header lines: %d" % nb_headers)
@@ -75,14 +143,12 @@ def MPTfile(file_or_path, encoding='ascii'):
# make three lines. Every additional line is a comment line.
comments = [next(mpt_file) for i in range(nb_headers - 3)]
fieldnames = next(mpt_file).decode(encoding).strip().split('\t')
fieldnames = next(mpt_file).decode(encoding).strip().split("\t")
record_type = np.dtype(list(map(fieldname_to_dtype, fieldnames)))
# Must be able to parse files where commas are used for decimal points
converter_dict = dict(((i, comma_converter)
for i in range(len(fieldnames))))
mpt_array = np.loadtxt(mpt_file, dtype=record_type,
converters=converter_dict)
converter_dict = dict(((i, comma_converter) for i in range(len(fieldnames))))
mpt_array = np.loadtxt(mpt_file, dtype=record_type, converters=converter_dict)
return mpt_array, comments
@@ -95,15 +161,15 @@ def MPTfileCSV(file_or_path):
"""
if isinstance(file_or_path, str):
mpt_file = open(file_or_path, 'r')
mpt_file = open(file_or_path, "r")
else:
mpt_file = file_or_path
magic = next(mpt_file)
if magic.rstrip() != 'EC-Lab ASCII FILE':
if magic.rstrip() != "EC-Lab ASCII FILE":
raise ValueError("Bad first line for EC-Lab file: '%s'" % magic)
nb_headers_match = re.match(r'Nb header lines : (\d+)\s*$', next(mpt_file))
nb_headers_match = re.match(r"Nb header lines : (\d+)\s*$", next(mpt_file))
nb_headers = int(nb_headers_match.group(1))
if nb_headers < 3:
raise ValueError("Too few header lines: %d" % nb_headers)
@@ -112,145 +178,243 @@ def MPTfileCSV(file_or_path):
# make three lines. Every additional line is a comment line.
comments = [next(mpt_file) for i in range(nb_headers - 3)]
mpt_csv = csv.DictReader(mpt_file, dialect='excel-tab')
mpt_csv = csv.DictReader(mpt_file, dialect="excel-tab")
expected_fieldnames = (
["mode", "ox/red", "error", "control changes", "Ns changes",
"counter inc.", "time/s", "control/V/mA", "Ewe/V", "dq/mA.h",
"P/W", "<I>/mA", "(Q-Qo)/mA.h", "x"],
['mode', 'ox/red', 'error', 'control changes', 'Ns changes',
'counter inc.', 'time/s', 'control/V', 'Ewe/V', 'dq/mA.h',
'<I>/mA', '(Q-Qo)/mA.h', 'x'],
["mode", "ox/red", "error", "control changes", "Ns changes",
"counter inc.", "time/s", "control/V", "Ewe/V", "I/mA",
"dQ/mA.h", "P/W"],
["mode", "ox/red", "error", "control changes", "Ns changes",
"counter inc.", "time/s", "control/V", "Ewe/V", "<I>/mA",
"dQ/mA.h", "P/W"])
[
"mode",
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
"time/s",
"control/V/mA",
"Ewe/V",
"dq/mA.h",
"P/W",
"<I>/mA",
"(Q-Qo)/mA.h",
"x",
],
[
"mode",
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
"time/s",
"control/V",
"Ewe/V",
"dq/mA.h",
"<I>/mA",
"(Q-Qo)/mA.h",
"x",
],
[
"mode",
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
"time/s",
"control/V",
"Ewe/V",
"I/mA",
"dQ/mA.h",
"P/W",
],
[
"mode",
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
"time/s",
"control/V",
"Ewe/V",
"<I>/mA",
"dQ/mA.h",
"P/W",
],
)
if mpt_csv.fieldnames not in expected_fieldnames:
raise ValueError("Unrecognised headers for MPT file format")
return mpt_csv, comments
VMPmodule_hdr = np.dtype([('shortname', 'S10'),
('longname', 'S25'),
('length', '<u4'),
('version', '<u4'),
('date', 'S8')])
VMPmodule_hdr_v1 = np.dtype(
[
("shortname", "S10"),
("longname", "S25"),
("length", "<u4"),
("version", "<u4"),
("date", "S8"),
]
)
VMPmodule_hdr_v2 = np.dtype(
[
("shortname", "S10"),
("longname", "S25"),
("max length", "<u4"),
("length", "<u4"),
("version", "<u4"),
("unknown2", "<u4"), # 10 for set, log and loop, 11 for data
("date", "S8"),
]
)
# Maps from colID to a tuple defining a numpy dtype
VMPdata_colID_dtype_map = {
4: ('time/s', '<f8'),
5: ('control/V/mA', '<f4'),
6: ('Ewe/V', '<f4'),
7: ('dQ/mA.h', '<f8'),
8: ('I/mA', '<f4'), # 8 is either I or <I> ??
9: ('Ece/V', '<f4'),
11: ('I/mA', '<f8'),
13: ('(Q-Qo)/mA.h', '<f8'),
16: ('Analog IN 1/V', '<f4'),
19: ('control/V', '<f4'),
20: ('control/mA', '<f4'),
23: ('dQ/mA.h', '<f8'), # Same as 7?
24: ('cycle number', '<f8'),
26: ('Rapp/Ohm', '<f4'),
32: ('freq/Hz', '<f4'),
33: ('|Ewe|/V', '<f4'),
34: ('|I|/A', '<f4'),
35: ('Phase(Z)/deg', '<f4'),
36: ('|Z|/Ohm', '<f4'),
37: ('Re(Z)/Ohm', '<f4'),
38: ('-Im(Z)/Ohm', '<f4'),
39: ('I Range', '<u2'),
69: ('R/Ohm', '<f4'),
70: ('P/W', '<f4'),
74: ('Energy/W.h', '<f8'),
75: ('Analog OUT/V', '<f4'),
76: ('<I>/mA', '<f4'),
77: ('<Ewe>/V', '<f4'),
78: ('Cs-2/µF-2', '<f4'),
96: ('|Ece|/V', '<f4'),
98: ('Phase(Zce)/deg', '<f4'),
99: ('|Zce|/Ohm', '<f4'),
100: ('Re(Zce)/Ohm', '<f4'),
101: ('-Im(Zce)/Ohm', '<f4'),
123: ('Energy charge/W.h', '<f8'),
124: ('Energy discharge/W.h', '<f8'),
125: ('Capacitance charge/µF', '<f8'),
126: ('Capacitance discharge/µF', '<f8'),
131: ('Ns', '<u2'),
163: ('|Estack|/V', '<f4'),
168: ('Rcmp/Ohm', '<f4'),
169: ('Cs/µF', '<f4'),
172: ('Cp/µF', '<f4'),
173: ('Cp-2/µF-2', '<f4'),
241: ('|E1|/V', '<f4'),
242: ('|E2|/V', '<f4'),
271: ('Phase(Z1) / deg', '<f4'),
272: ('Phase(Z2) / deg', '<f4'),
301: ('|Z1|/Ohm', '<f4'),
302: ('|Z2|/Ohm', '<f4'),
331: ('Re(Z1)/Ohm', '<f4'),
332: ('Re(Z2)/Ohm', '<f4'),
361: ('-Im(Z1)/Ohm', '<f4'),
362: ('-Im(Z2)/Ohm', '<f4'),
391: ('<E1>/V', '<f4'),
392: ('<E2>/V', '<f4'),
422: ('Phase(Zstack)/deg', '<f4'),
423: ('|Zstack|/Ohm', '<f4'),
424: ('Re(Zstack)/Ohm', '<f4'),
425: ('-Im(Zstack)/Ohm', '<f4'),
426: ('<Estack>/V', '<f4'),
430: ('Phase(Zwe-ce)/deg', '<f4'),
431: ('|Zwe-ce|/Ohm', '<f4'),
432: ('Re(Zwe-ce)/Ohm', '<f4'),
433: ('-Im(Zwe-ce)/Ohm', '<f4'),
434: ('(Q-Qo)/C', '<f4'),
435: ('dQ/C', '<f4'),
441: ('<Ecv>/V', '<f4'),
462: ('Temperature/°C', '<f4'),
467: ('Q charge/discharge/mA.h', '<f8'),
468: ('half cycle', '<u4'),
469: ('z cycle', '<u4'),
471: ('<Ece>/V', '<f4'),
473: ('THD Ewe/%', '<f4'),
474: ('THD I/%', '<f4'),
476: ('NSD Ewe/%', '<f4'),
477: ('NSD I/%', '<f4'),
479: ('NSR Ewe/%', '<f4'),
480: ('NSR I/%', '<f4'),
486: ('|Ewe h2|/V', '<f4'),
487: ('|Ewe h3|/V', '<f4'),
488: ('|Ewe h4|/V', '<f4'),
489: ('|Ewe h5|/V', '<f4'),
490: ('|Ewe h6|/V', '<f4'),
491: ('|Ewe h7|/V', '<f4'),
492: ('|I h2|/A', '<f4'),
493: ('|I h3|/A', '<f4'),
494: ('|I h4|/A', '<f4'),
495: ('|I h5|/A', '<f4'),
496: ('|I h6|/A', '<f4'),
497: ('|I h7|/A', '<f4'),
4: ("time/s", "<f8"),
5: ("control/V/mA", "<f4"),
6: ("Ewe/V", "<f4"),
7: ("dq/mA.h", "<f8"),
8: ("I/mA", "<f4"), # 8 is either I or <I> ??
9: ("Ece/V", "<f4"),
11: ("<I>/mA", "<f8"),
13: ("(Q-Qo)/mA.h", "<f8"),
16: ("Analog IN 1/V", "<f4"),
17: ("Analog IN 2/V", "<f4"), # Probably column 18 is Analog IN 3/V, if anyone hits this error in the future # noqa: E501
19: ("control/V", "<f4"),
20: ("control/mA", "<f4"),
23: ("dQ/mA.h", "<f8"), # Same as 7?
24: ("cycle number", "<f8"),
26: ("Rapp/Ohm", "<f4"),
27: ("Ewe-Ece/V", "<f4"),
32: ("freq/Hz", "<f4"),
33: ("|Ewe|/V", "<f4"),
34: ("|I|/A", "<f4"),
35: ("Phase(Z)/deg", "<f4"),
36: ("|Z|/Ohm", "<f4"),
37: ("Re(Z)/Ohm", "<f4"),
38: ("-Im(Z)/Ohm", "<f4"),
39: ("I Range", "<u2"),
69: ("R/Ohm", "<f4"),
70: ("P/W", "<f4"),
74: ("|Energy|/W.h", "<f8"),
75: ("Analog OUT/V", "<f4"),
76: ("<I>/mA", "<f4"),
77: ("<Ewe>/V", "<f4"),
78: ("Cs-2/µF-2", "<f4"),
96: ("|Ece|/V", "<f4"),
98: ("Phase(Zce)/deg", "<f4"),
99: ("|Zce|/Ohm", "<f4"),
100: ("Re(Zce)/Ohm", "<f4"),
101: ("-Im(Zce)/Ohm", "<f4"),
123: ("Energy charge/W.h", "<f8"),
124: ("Energy discharge/W.h", "<f8"),
125: ("Capacitance charge/µF", "<f8"),
126: ("Capacitance discharge/µF", "<f8"),
131: ("Ns", "<u2"),
163: ("|Estack|/V", "<f4"),
168: ("Rcmp/Ohm", "<f4"),
169: ("Cs/µF", "<f4"),
172: ("Cp/µF", "<f4"),
173: ("Cp-2/µF-2", "<f4"),
174: ("<Ewe>/V", "<f4"),
178: ("(Q-Qo)/C", "<f4"),
179: ("dQ/C", "<f4"),
182: ("step time/s", "<f8"),
211: ("Q charge/discharge/mA.h", "<f8"),
212: ("half cycle", "<u4"),
213: ("z cycle", "<u4"),
217: ("THD Ewe/%", "<f4"),
218: ("THD I/%", "<f4"),
220: ("NSD Ewe/%", "<f4"),
221: ("NSD I/%", "<f4"),
223: ("NSR Ewe/%", "<f4"),
224: ("NSR I/%", "<f4"),
230: ("|Ewe h2|/V", "<f4"),
231: ("|Ewe h3|/V", "<f4"),
232: ("|Ewe h4|/V", "<f4"),
233: ("|Ewe h5|/V", "<f4"),
234: ("|Ewe h6|/V", "<f4"),
235: ("|Ewe h7|/V", "<f4"),
236: ("|I h2|/A", "<f4"),
237: ("|I h3|/A", "<f4"),
238: ("|I h4|/A", "<f4"),
239: ("|I h5|/A", "<f4"),
240: ("|I h6|/A", "<f4"),
241: ("|I h7|/A", "<f4"),
242: ("|E2|/V", "<f4"),
271: ("Phase(Z1) / deg", "<f4"),
272: ("Phase(Z2) / deg", "<f4"),
301: ("|Z1|/Ohm", "<f4"),
302: ("|Z2|/Ohm", "<f4"),
331: ("Re(Z1)/Ohm", "<f4"),
332: ("Re(Z2)/Ohm", "<f4"),
361: ("-Im(Z1)/Ohm", "<f4"),
362: ("-Im(Z2)/Ohm", "<f4"),
391: ("<E1>/V", "<f4"),
392: ("<E2>/V", "<f4"),
422: ("Phase(Zstack)/deg", "<f4"),
423: ("|Zstack|/Ohm", "<f4"),
424: ("Re(Zstack)/Ohm", "<f4"),
425: ("-Im(Zstack)/Ohm", "<f4"),
426: ("<Estack>/V", "<f4"),
430: ("Phase(Zwe-ce)/deg", "<f4"),
431: ("|Zwe-ce|/Ohm", "<f4"),
432: ("Re(Zwe-ce)/Ohm", "<f4"),
433: ("-Im(Zwe-ce)/Ohm", "<f4"),
434: ("(Q-Qo)/C", "<f4"),
435: ("dQ/C", "<f4"),
438: ("step time/s", "<f8"),
441: ("<Ecv>/V", "<f4"),
462: ("Temperature/°C", "<f4"),
467: ("Q charge/discharge/mA.h", "<f8"),
468: ("half cycle", "<u4"),
469: ("z cycle", "<u4"),
471: ("<Ece>/V", "<f4"),
473: ("THD Ewe/%", "<f4"),
474: ("THD I/%", "<f4"),
476: ("NSD Ewe/%", "<f4"),
477: ("NSD I/%", "<f4"),
479: ("NSR Ewe/%", "<f4"),
480: ("NSR I/%", "<f4"),
486: ("|Ewe h2|/V", "<f4"),
487: ("|Ewe h3|/V", "<f4"),
488: ("|Ewe h4|/V", "<f4"),
489: ("|Ewe h5|/V", "<f4"),
490: ("|Ewe h6|/V", "<f4"),
491: ("|Ewe h7|/V", "<f4"),
492: ("|I h2|/A", "<f4"),
493: ("|I h3|/A", "<f4"),
494: ("|I h4|/A", "<f4"),
495: ("|I h5|/A", "<f4"),
496: ("|I h6|/A", "<f4"),
497: ("|I h7|/A", "<f4"),
498: ("Q charge/mA.h", "<f8"),
499: ("Q discharge/mA.h", "<f8"),
500: ("step time/s", "<f8"),
501: ("Efficiency/%", "<f8"),
502: ("Capacity/mA.h", "<f8"),
505: ("Rdc/Ohm", "<f4"),
509: ("Acir/Dcir Control", "<u1"),
}
# These column IDs define flags which are all stored packed in a single byte
# The values in the map are (name, bitmask, dtype)
VMPdata_colID_flag_map = {
1: ('mode', 0x03, np.uint8),
2: ('ox/red', 0x04, np.bool_),
3: ('error', 0x08, np.bool_),
21: ('control changes', 0x10, np.bool_),
31: ('Ns changes', 0x20, np.bool_),
65: ('counter inc.', 0x80, np.bool_),
1: ("mode", 0x03, np.uint8),
2: ("ox/red", 0x04, np.bool_),
3: ("error", 0x08, np.bool_),
21: ("control changes", 0x10, np.bool_),
31: ("Ns changes", 0x20, np.bool_),
65: ("counter inc.", 0x80, np.bool_),
}
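# Illustrative note (not part of BioLogic.py itself): because every flag column
# is packed into the single 'flags' byte of each record, a loaded array `data`
# can recover individual flags with the bitmasks above, e.g.:
#     mode = np.asarray(data["flags"] & 0x03, dtype=np.uint8)
#     ox_red = (data["flags"] & 0x04).astype(np.bool_)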
def parse_BioLogic_date(date_text):
"""Parse a date from one of the various formats used by Bio-Logic files."""
date_formats = ['%m/%d/%y', '%m-%d-%y', '%m.%d.%y']
date_formats = ["%m/%d/%y", "%m-%d-%y", "%m.%d.%y"]
if isinstance(date_text, bytes):
date_string = date_text.decode('ascii')
date_string = date_text.decode("ascii")
else:
date_string = date_text
for date_format in date_formats:
@@ -261,23 +425,30 @@ def parse_BioLogic_date(date_text):
else:
break
else:
raise ValueError(f'Could not parse timestamp {date_string!r}'
f' with any of the formats {date_formats}')
raise ValueError(
f"Could not parse timestamp {date_string!r}"
f" with any of the formats {date_formats}"
)
return date(tm.tm_year, tm.tm_mon, tm.tm_mday)
def VMPdata_dtype_from_colIDs(colIDs):
def VMPdata_dtype_from_colIDs(colIDs, error_on_unknown_column: bool = True):
"""Get a numpy record type from a list of column ID numbers.
The binary layout of the data in the MPR file is described by the sequence
of column ID numbers in the file header. This function converts that
sequence into a numpy dtype which can then be used to load data from the
sequence into a list of dtype tuples that can be used to build a numpy dtype and load data from the
file with np.frombuffer().
Some column IDs refer to small values which are packed into a single byte.
The second return value is a dict describing the bit masks with which to
extract these columns from the flags byte.
If error_on_unknown_column is True, an error will be raised if an unknown
column ID is encountered. If it is False, a warning will be emitted and attempts
will be made to read the column with a few different dtypes.
"""
type_list = []
field_name_counts = defaultdict(int)
@@ -289,9 +460,9 @@ def VMPdata_dtype_from_colIDs(colIDs):
# in the overall record is determined by the position of the first
# column ID of flag type. If there are several flags present,
# there is still only one 'flags' int
if 'flags' not in field_name_counts:
type_list.append(('flags', 'u1'))
field_name_counts['flags'] = 1
if "flags" not in field_name_counts:
type_list.append(("flags", "u1"))
field_name_counts["flags"] = 1
flag_name, flag_mask, flag_type = VMPdata_colID_flag_map[colID]
# TODO what happens if a flag colID has already been seen
# i.e. if flag_name is already present in flags_dict?
@@ -302,16 +473,24 @@ def VMPdata_dtype_from_colIDs(colIDs):
field_name_counts[field_name] += 1
count = field_name_counts[field_name]
if count > 1:
unique_field_name = '%s %d' % (field_name, count)
unique_field_name = "%s %d" % (field_name, count)
else:
unique_field_name = field_name
type_list.append((unique_field_name, field_type))
else:
raise NotImplementedError("Column ID {cid} after column {prev} "
"is unknown"
.format(cid=colID,
prev=type_list[-1][0]))
return np.dtype(type_list), flags_dict
if error_on_unknown_column:
raise NotImplementedError(
"Column ID {cid} after column {prev} is unknown".format(
cid=colID, prev=type_list[-1][0]
)
)
warnings.warn(
"Unknown column ID %d -- will attempt to read as common dtypes"
% colID
)
type_list.append(("unknown_colID_%d" % colID, UNKNOWN_COLUMN_TYPE_HIERARCHY[0]))
return type_list, flags_dict
def read_VMP_modules(fileobj, read_module_data=True):
@@ -321,36 +500,128 @@ def read_VMP_modules(fileobj, read_module_data=True):
N.B. the offset yielded is the offset to the start of the data i.e. after
the end of the header. The data runs from (offset) to (offset+length)"""
while True:
module_magic = fileobj.read(len(b'MODULE'))
module_magic = fileobj.read(len(b"MODULE"))
if len(module_magic) == 0: # end of file
break
elif module_magic != b'MODULE':
raise ValueError("Found %r, expecting start of new VMP MODULE"
% module_magic)
elif module_magic != b"MODULE":
raise ValueError(
"Found %r, expecting start of new VMP MODULE" % module_magic
)
VMPmodule_hdr = VMPmodule_hdr_v1
# Reading headers binary information
hdr_bytes = fileobj.read(VMPmodule_hdr.itemsize)
if len(hdr_bytes) < VMPmodule_hdr.itemsize:
raise IOError("Unexpected end of file while reading module header")
# Checking if EC-Lab version is >= 11.50
if hdr_bytes[35:39] == b"\xff\xff\xff\xff":
VMPmodule_hdr = VMPmodule_hdr_v2
hdr_bytes += fileobj.read(VMPmodule_hdr_v2.itemsize - VMPmodule_hdr_v1.itemsize)
hdr = np.frombuffer(hdr_bytes, dtype=VMPmodule_hdr, count=1)
hdr_dict = dict(((n, hdr[n][0]) for n in VMPmodule_hdr.names))
hdr_dict['offset'] = fileobj.tell()
hdr_dict["offset"] = fileobj.tell()
if read_module_data:
hdr_dict['data'] = fileobj.read(hdr_dict['length'])
if len(hdr_dict['data']) != hdr_dict['length']:
raise IOError("""Unexpected end of file while reading data
hdr_dict["data"] = fileobj.read(hdr_dict["length"])
if len(hdr_dict["data"]) != hdr_dict["length"]:
raise IOError(
"""Unexpected end of file while reading data
current module: %s
length read: %d
length expected: %d""" % (hdr_dict['longname'],
len(hdr_dict['data']),
hdr_dict['length']))
length expected: %d"""
% (
hdr_dict["longname"],
len(hdr_dict["data"]),
hdr_dict["length"],
)
)
yield hdr_dict
else:
yield hdr_dict
fileobj.seek(hdr_dict['offset'] + hdr_dict['length'], SEEK_SET)
fileobj.seek(hdr_dict["offset"] + hdr_dict["length"], SEEK_SET)
MPR_MAGIC = b'BIO-LOGIC MODULAR FILE\x1a'.ljust(48) + b'\x00\x00\x00\x00'
def loop_from_file(file: str, encoding: str = "latin1"):
"""
When an experiment is still running and it includes loops,
a _LOOP.txt file is temporarily created to progressively store the indexes of new loops.
This function reads the file and creates the loop_index array for MPRfile initialization.
Parameters
----------
file : str
Path of the loop file.
encoding : str, optional
Encoding of the text file. The default is "latin1".
Raises
------
ValueError
If the file does not start with "VMP EXPERIMENT LOOP INDEXES".
Returns
-------
loop_index : np.array
Indexes of data points that start a new loop.
"""
with open(file, "r", encoding=encoding) as f:
line = f.readline().strip()
if line != LOOP_MAGIC:
raise ValueError("Invalid magic for LOOP.txt file")
loop_index = np.array([int(line) for line in f], dtype="u4")
return loop_index
def timestamp_from_file(file: str, encoding: str = "latin1"):
"""
When an experiment is still running, a .mpl file is temporarily created to store
information that will be added in the log module and will be appended to the data
module in the .mpr file at the end of experiment.
This function reads the file and extracts the experimental starting date and time
as a timestamp for MPRfile initialization.
Parameters
----------
file : str
Path of the log file.
encoding : str, optional
Encoding of the text file. The default is "latin1".
Raises
------
ValueError
If the file does not start with "EC-Lab LOG FILE" or "BT-Lab LOG FILE".
Returns
-------
timestamp
Date and time of the start of data acquisition
"""
with open(file, "r", encoding=encoding) as f:
line = f.readline().strip()
if line not in LOG_MAGIC:
raise ValueError("Invalid magic for .mpl file")
log = f.read()
start = tuple(
map(
int,
re.findall(
r"Acquisition started on : (\d+)\/(\d+)\/(\d+) (\d+):(\d+):(\d+)\.(\d+)",
"".join(log),
)[0],
)
)
return datetime(
int(start[2]), start[0], start[1], start[3], start[4], start[5], start[6] * 1000
)
LOG_MAGIC = "EC-Lab LOG FILEBT-Lab LOG FILE"
LOOP_MAGIC = "VMP EXPERIMENT LOOP INDEXES"
MPR_MAGIC = b"BIO-LOGIC MODULAR FILE\x1a".ljust(48) + b"\x00\x00\x00\x00"
class MPRfile:
@@ -369,83 +640,157 @@ class MPRfile:
enddate - The date when the experiment finished
"""
def __init__(self, file_or_path):
def __init__(self, file_or_path, error_on_unknown_column: bool = True):
"""Pass an EC-lab .mpr file to be parsed.
Parameters:
file_or_path: Either the open file data or a path to it.
error_on_unknown_column: Whether or not to raise an error if an
unknown column ID is encountered. If False, a warning will be
emitted and the column will be added as 'unknown_colID_<n>', with
an attempt to read it with a few different dtypes.
"""
self.loop_index = None
if isinstance(file_or_path, str):
mpr_file = open(file_or_path, 'rb')
mpr_file = open(file_or_path, "rb")
loop_file = file_or_path[:-4] + "_LOOP.txt" # loop file for running experiment
log_file = file_or_path[:-1] + "l" # log file for running experiment
else:
mpr_file = file_or_path
magic = mpr_file.read(len(MPR_MAGIC))
if magic != MPR_MAGIC:
raise ValueError('Invalid magic for .mpr file: %s' % magic)
raise ValueError("Invalid magic for .mpr file: %s" % magic)
modules = list(read_VMP_modules(mpr_file))
self.modules = modules
settings_mod, = (m for m in modules if m['shortname'] == b'VMP Set ')
data_module, = (m for m in modules if m['shortname'] == b'VMP data ')
maybe_loop_module = [m for m in modules if m['shortname'] == b'VMP loop ']
maybe_log_module = [m for m in modules if m['shortname'] == b'VMP LOG ']
(settings_mod,) = (m for m in modules if m["shortname"] == b"VMP Set ")
(data_module,) = (m for m in modules if m["shortname"] == b"VMP data ")
maybe_loop_module = [m for m in modules if m["shortname"] == b"VMP loop "]
maybe_log_module = [m for m in modules if m["shortname"] == b"VMP LOG "]
n_data_points = np.frombuffer(data_module['data'][:4], dtype='<u4')
n_columns = np.frombuffer(data_module['data'][4:5], dtype='u1').item()
n_data_points = np.frombuffer(data_module["data"][:4], dtype="<u4")
n_columns = np.frombuffer(data_module["data"][4:5], dtype="u1").item()
if data_module['version'] == 0:
column_types = np.frombuffer(data_module['data'][5:], dtype='u1',
count=n_columns)
remaining_headers = data_module['data'][5 + n_columns:100]
main_data = data_module['data'][100:]
elif data_module['version'] in [2, 3]:
column_types = np.frombuffer(data_module['data'][5:], dtype='<u2',
count=n_columns)
if data_module["version"] == 0:
# If EC-Lab version >= 11.50, column_types is [0 1 0 3 0 174...] instead of [1 3 174...]
if np.frombuffer(data_module["data"][5:6], dtype="u1").item():
column_types = np.frombuffer(data_module["data"][5:], dtype="u1", count=n_columns)
remaining_headers = data_module["data"][5 + n_columns:100]
main_data = data_module["data"][100:]
else:
column_types = np.frombuffer(
data_module["data"][5:], dtype="u1", count=n_columns * 2
)
column_types = column_types[1::2] # suppressing zeros in column types array
# remaining headers should be empty except for bytes 5 + n_columns * 2
# and 1006 which are sometimes == 1
remaining_headers = data_module["data"][6 + n_columns * 2:1006]
main_data = data_module["data"][1007:]
elif data_module["version"] in [2, 3]:
column_types = np.frombuffer(data_module["data"][5:], dtype="<u2", count=n_columns)
# There are bytes of data before the main array starts
if data_module['version'] == 3:
if data_module["version"] == 3:
num_bytes_before = 406 # version 3 added `\x01` to the start
else:
num_bytes_before = 405
remaining_headers = data_module['data'][5 + 2 * n_columns:405]
main_data = data_module['data'][num_bytes_before:]
remaining_headers = data_module["data"][5 + 2 * n_columns:405]
main_data = data_module["data"][num_bytes_before:]
else:
raise ValueError("Unrecognised version for data module: %d" %
data_module['version'])
raise ValueError(
"Unrecognised version for data module: %d" % data_module["version"]
)
assert(not any(remaining_headers))
assert not any(remaining_headers)
self.dtype, self.flags_dict = VMPdata_dtype_from_colIDs(column_types)
self.data = np.frombuffer(main_data, dtype=self.dtype)
assert(self.data.shape[0] == n_data_points)
dtypes, self.flags_dict = VMPdata_dtype_from_colIDs(
column_types, error_on_unknown_column=error_on_unknown_column
)
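# `dtypes` is a list of (field-name, dtype-string) pairs, e.g.
# [("flags", "u1"), ("time/s", "<f8")], which np.dtype() later turns into
# a structured record layout describing one row of the data array.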
unknown_cols = []
# Iteratively work through the unknown columns and try to read them
if not error_on_unknown_column:
for col, _ in dtypes:
if col.startswith("unknown_colID"):
unknown_cols.append(col)
if len(unknown_cols) > 3:
raise RuntimeError(
"Too many unknown columns to attempt to read combinatorially: %s"
% unknown_cols
)
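# With k candidate dtypes and n unknown columns there are k**n layouts
# to try, so the search is capped at three unknown columns.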
if unknown_cols:
# create a list of all possible combinations of dtypes
# for the unknown columns
from itertools import product
perms = product(UNKNOWN_COLUMN_TYPE_HIERARCHY, repeat=len(unknown_cols))
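# For a hypothetical hierarchy of ("<f8", "<f4") and two unknown columns,
# `perms` yields ("<f8", "<f8"), ("<f8", "<f4"), ("<f4", "<f8"),
# ("<f4", "<f4"), tried in order until np.frombuffer accepts one.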
for perm in perms:
for unknown_col_ind, c in enumerate(unknown_cols):
for ind, (col, _) in enumerate(dtypes):
if c == col:
dtypes[ind] = (col, perm[unknown_col_ind])
try:
self.dtype = np.dtype(dtypes)
self.data = np.frombuffer(main_data, dtype=self.dtype)
break
except ValueError:
continue
else:
raise RuntimeError(
"Unable to read data for unknown columns %s with any of the common dtypes %s"
% (unknown_cols, UNKNOWN_COLUMN_TYPE_HIERARCHY)
)
else:
self.dtype = np.dtype(dtypes)
self.data = np.frombuffer(main_data, dtype=self.dtype)
assert self.data.shape[0] == n_data_points
# No idea what these 'column types' mean or even if they are actually
# column types at all
self.version = int(data_module["version"])
self.cols = column_types
self.npts = n_data_points
self.startdate = parse_BioLogic_date(settings_mod["date"])
if maybe_loop_module:
(loop_module,) = maybe_loop_module
if loop_module["version"] == 0:
self.loop_index = np.frombuffer(loop_module["data"][4:], dtype="<u4")
self.loop_index = np.trim_zeros(self.loop_index, "b")
else:
raise ValueError("Unrecognised version for data module: %d" %
data_module['version'])
raise ValueError(
"Unrecognised version for loop module: %d" % loop_module["version"]
)
else:
if os.path.isfile(loop_file):
self.loop_index = loop_from_file(loop_file)
if self.loop_index[-1] < n_data_points:
self.loop_index = np.append(self.loop_index, n_data_points)
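# The loop file presumably records only loop starts, so the total point
# count is appended to close the final loop interval.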
if maybe_log_module:
(log_module,) = maybe_log_module
self.enddate = parse_BioLogic_date(log_module["date"])
# There is a timestamp at 465, 469, 473 or 585 bytes.
# I can't find any reason why it is at one offset rather than
# another in any given file.
ole_timestamp1 = np.frombuffer(
log_module["data"][465:], dtype="<f8", count=1
)
ole_timestamp2 = np.frombuffer(
log_module["data"][469:], dtype="<f8", count=1
)
ole_timestamp3 = np.frombuffer(
log_module["data"][473:], dtype="<f8", count=1
)
ole_timestamp4 = np.frombuffer(
log_module["data"][585:], dtype="<f8", count=1
)
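# These are OLE automation dates: days since 1899-12-30. Values between
# 40000 and 50000 correspond to roughly mid-2009 through late 2036, which
# serves as a plausibility window for picking the right offset.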
if ole_timestamp1 > 40000 and ole_timestamp1 < 50000:
ole_timestamp = ole_timestamp1
elif ole_timestamp2 > 40000 and ole_timestamp2 < 50000:
ole_timestamp = ole_timestamp2
elif ole_timestamp3 > 40000 and ole_timestamp3 < 50000:
ole_timestamp = ole_timestamp3
elif ole_timestamp4 > 40000 and ole_timestamp4 < 50000:
ole_timestamp = ole_timestamp4
else:
raise ValueError("Could not find timestamp in the LOG module")
ole_timedelta = timedelta(days=ole_timestamp[0])
self.timestamp = ole_base + ole_timedelta
if self.startdate != self.timestamp.date():
raise ValueError("Date mismatch:\n"
+ " Start date: %s\n" % self.startdate
+ " End date: %s\n" % self.enddate
+ " Timestamp: %s\n" % self.timestamp)
raise ValueError(
"Date mismatch:\n"
+ " Start date: %s\n" % self.startdate
+ " End date: %s\n" % self.enddate
+ " Timestamp: %s\n" % self.timestamp
)
else:
if os.path.isfile(log_file):
self.timestamp = timestamp_from_file(log_file)
self.enddate = None
def get_flag(self, flagname):
if flagname in self.flags_dict:
mask, dtype = self.flags_dict[flagname]
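# flags_dict maps a flag name to a (bitmask, dtype) pair; several boolean
# flags are packed into the single 'flags' byte column per row.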
return np.array(self.data["flags"] & mask, dtype=dtype)
else:
raise AttributeError("Flag '%s' not present" % flagname)

from .BioLogic import MPRfile, MPTfile
__all__ = ["MPRfile", "MPTfile"]

# $ mdb-schema <result.res> oracle
mdb_tables = [
"Version_Table",
"Global_Table",
"Resume_Table",
"Channel_Normal_Table",
"Channel_Statistic_Table",
"Auxiliary_Table",
"Event_Table",
"Smart_Battery_Info_Table",
"Smart_Battery_Data_Table",
]
mdb_5_23_tables = [
"MCell_Aci_Data_Table",
"Aux_Global_Data_Table",
"Smart_Battery_Clock_Stretch_Table",
]
mdb_5_26_tables = [
"Can_BMS_Info_Table",
"Can_BMS_Data_Table",
]
mdb_tables_text = {
"Version_Table",
"Global_Table",
"Event_Table",
"Smart_Battery_Info_Table",
"Can_BMS_Info_Table",
}
mdb_tables_numeric = {
"Resume_Table",
"Channel_Normal_Table",
"Channel_Statistic_Table",
"Auxiliary_Table",
"Smart_Battery_Data_Table",
"MCell_Aci_Data_Table",
"Aux_Global_Data_Table",
"Smart_Battery_Clock_Stretch_Table",
"Can_BMS_Data_Table",
}
mdb_create_scripts = {
Event_Type INTEGER,
Event_Describe TEXT
); """,
"Smart_Battery_Info_Table": """
"Smart_Battery_Info_Table": """
CREATE TABLE Smart_Battery_Info_Table
(
Test_ID INTEGER PRIMARY KEY REFERENCES Global_Table(Test_ID),
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
); """,
# The following tables are not present in version 1.14, but are in 5.23
"MCell_Aci_Data_Table": """
CREATE TABLE MCell_Aci_Data_Table
(
Test_ID INTEGER,
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
);""",
"Aux_Global_Data_Table": """
CREATE TABLE Aux_Global_Data_Table
(
Channel_Index INTEGER,
Unit TEXT,
PRIMARY KEY (Channel_Index, Auxiliary_Index, Data_Type)
);""",
"Smart_Battery_Clock_Stretch_Table": """
CREATE TABLE Smart_Battery_Clock_Stretch_Table
(
Test_ID INTEGER,
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
);""",
# The following tables are not present in version 5.23, but are in 5.26
"Can_BMS_Info_Table": """
CREATE TABLE "Can_BMS_Info_Table"
(
Channel_Index INTEGER PRIMARY KEY,
CAN_Configuration TEXT
);
""",
"Can_BMS_Data_Table": """
CREATE TABLE "Can_BMS_Data_Table"
(
Test_ID INTEGER,
mdb_create_indices = {
CREATE UNIQUE INDEX data_point_index ON Channel_Normal_Table (Test_ID, Data_Point);
CREATE INDEX voltage_index ON Channel_Normal_Table (Test_ID, Voltage);
CREATE INDEX test_time_index ON Channel_Normal_Table (Test_ID, Test_Time);
"""}
"""
}
helper_table_script = """
CREATE TEMPORARY TABLE capacity_helper(
def mdb_get_data_text(s3db, filename, table):
print("Reading %s..." % table)
insert_pattern = re.compile(
r"""INSERT INTO "\w+" \([^)]+?\) VALUES (\((('[^']*')|"[^"]*"|[^')])+?\),?\s*)+;\n""",
re.IGNORECASE,
)
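# The revised pattern also accepts single-quoted string literals and
# multi-row statements such as
#   INSERT INTO "t" (a, b) VALUES ('x', 1), ('y', 2);
# which some mdb-export versions emit.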
try:
# Initialize values to avoid NameError in except clause
mdb_output = ""
insert_match = None
with sp.Popen(
["mdb-export", "-I", "postgres", filename, table],
bufsize=-1,
stdin=sp.DEVNULL,
stdout=sp.PIPE,
universal_newlines=True,
) as mdb_sql:
mdb_output = mdb_sql.stdout.read()
while len(mdb_output) > 0:
insert_match = insert_pattern.match(mdb_output)
except OSError as e:
if e.errno == 2:
raise RuntimeError(
"Could not locate the `mdb-export` executable. "
"Check that mdbtools is properly installed."
)
else:
raise
except BaseException:
def mdb_get_data_numeric(s3db, filename, table):
print("Reading %s..." % table)
try:
with sp.Popen(
["mdb-export", filename, table],
bufsize=-1,
stdin=sp.DEVNULL,
stdout=sp.PIPE,
universal_newlines=True,
) as mdb_sql:
mdb_csv = csv.reader(mdb_sql.stdout)
mdb_headers = next(mdb_csv)
quoted_headers = ['"%s"' % h for h in mdb_headers]
joined_headers = ", ".join(quoted_headers)
joined_placemarks = ", ".join(["?" for h in mdb_headers])
insert_stmt = 'INSERT INTO "{0}" ({1}) VALUES ({2});'.format(
table,
joined_headers,
s3db.commit()
except OSError as e:
if e.errno == 2:
raise RuntimeError(
"Could not locate the `mdb-export` executable. "
"Check that mdbtools is properly installed."
)
else:
raise
def mdb_get_data(s3db, filename, table):
if table in mdb_tables_text:
mdb_get_data_text(s3db, filename, table)
elif table in mdb_tables_numeric:
mdb_get_data_numeric(s3db, filename, table)
else:
raise ValueError("'%s' is in neither mdb_tables_text nor mdb_tables_numeric" % table)
raise ValueError(
"'%s' is in neither mdb_tables_text nor mdb_tables_numeric" % table
)
def mdb_get_version(filename):
"""
print("Reading version number...")
try:
with sp.Popen(
["mdb-export", filename, "Version_Table"],
bufsize=-1,
stdin=sp.DEVNULL,
stdout=sp.PIPE,
universal_newlines=True,
) as mdb_sql:
mdb_csv = csv.reader(mdb_sql.stdout)
mdb_headers = next(mdb_csv)
mdb_values = next(mdb_csv)
except StopIteration:
pass
else:
raise ValueError(
"Version_Table of %s lists multiple versions" % filename
)
except OSError as e:
if e.errno == 2:
raise RuntimeError(
"Could not locate the `mdb-export` executable. "
"Check that mdbtools is properly installed."
)
else:
raise
if "Version_Schema_Field" not in mdb_headers:
raise ValueError(
"Version_Table of %s does not contain a Version_Schema_Field column"
% filename
)
version_fields = dict(zip(mdb_headers, mdb_values))
version_text = version_fields["Version_Schema_Field"]
version_match = re.fullmatch("Results File ([.0-9]+)", version_text)
if not version_match:
raise ValueError(
'File version "%s" did not match expected format' % version_text
)
version_string = version_match.group(1)
version_tuple = tuple(map(int, version_string.split(".")))
return version_tuple
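# For example, a Version_Schema_Field of "Results File 5.26" yields (5, 26).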
def convert_arbin_to_sqlite(input_file, output_file=None):
"""Read data from an Arbin .res data file and write to a sqlite file.
Any data currently in a sqlite file at `output_file` will be erased!
Parameters:
input_file (str): The path to the Arbin .res file to read from.
output_file (str or None): The path to the sqlite file to write to; if None,
return a `sqlite3.Connection` into an in-memory database.
Returns:
None or sqlite3.Connection
"""
arbin_version = mdb_get_version(input_file)
if output_file is None:
output_file = ":memory:"
s3db = sqlite3.connect(output_file)
tables_to_convert = copy(mdb_tables)
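# (The version-dependent part of the conversion is elided in this hunk; a
# plausible sketch, not the verbatim source, is:
#     if arbin_version >= (5, 23):
#         tables_to_convert.extend(mdb_5_23_tables)
#     if arbin_version >= (5, 26):
#         tables_to_convert.extend(mdb_5_26_tables)
# followed by creating and filling each table via mdb_get_data.)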
print("Vacuuming database...")
s3db.executescript("VACUUM; ANALYZE;")
if output_file == ":memory:":
return s3db
s3db.close()
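# A minimal sketch of the in-memory mode added above (requires mdbtools on
# PATH; "arbin1.res" is only an illustrative input name):
#
#     conn = convert_arbin_to_sqlite("arbin1.res")  # output_file=None
#     n_rows = conn.execute("SELECT COUNT(*) FROM Channel_Normal_Table").fetchone()[0]
#     conn.close()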
def main(argv=None):
parser = argparse.ArgumentParser(
description="Convert Arbin .res files to sqlite3 databases using mdb-export",
)
parser.add_argument("input_file", type=str) # need file name to pass to sp.Popen
parser.add_argument(
"output_file", type=str
) # need file name to pass to sqlite3.connect
args = parser.parse_args(argv)
convert_arbin_to_sqlite(args.input_file, args.output_file)
if __name__ == "__main__":
main()

#!/bin/sh
# SPDX-FileCopyrightText: 2014-2020 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
## Test data are posted on FigShare, listed in this article
# http://figshare.com/articles/galvani_test_data/1228760
mkdir -p tests/testdata
cd tests/testdata
/usr/bin/wget --continue -i - <<END_FILELIST
https://files.figshare.com/1778905/arbin1.res
https://files.figshare.com/1778937/bio_logic2.mpt
https://files.figshare.com/1778938/bio_logic5.mpt
https://files.figshare.com/1778939/bio_logic1.mpr
https://files.figshare.com/1778940/bio_logic6.mpr
https://files.figshare.com/1778941/bio_logic4.mpt
https://files.figshare.com/1778942/bio_logic5.mpr
https://files.figshare.com/1778943/bio_logic2.mpr
https://files.figshare.com/1778944/bio_logic6.mpt
https://files.figshare.com/1778945/bio_logic1.mpt
https://files.figshare.com/1778946/bio_logic3.mpr
https://files.figshare.com/1780444/bio_logic4.mpr
https://files.figshare.com/1780529/121_CA_455nm_6V_30min_C01.mpr
https://files.figshare.com/1780530/121_CA_455nm_6V_30min_C01.mpt
https://files.figshare.com/1780526/CV_C01.mpr
https://files.figshare.com/1780527/CV_C01.mpt
https://files.figshare.com/14752538/C019P-0ppb-A_C01.mpr
https://files.figshare.com/25331510/UM34_Test005E.res
END_FILELIST

import os.path
from setuptools import setup
with open(os.path.join(os.path.dirname(__file__), "README.md")) as f:
readme = f.read()
setup(
name="galvani",
version="0.5.0",
description="Open and process battery charger log data files",
long_description=readme,
long_description_content_type="text/markdown",
url="https://codeberg.org/echemdata/galvani",
author="Chris Kerr",
author_email="chris.kerr@mykolab.ch",
license="GPLv3+",
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Natural Language :: English",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Chemistry",
],
packages=["galvani"],
entry_points={
"console_scripts": [
"res2sqlite = galvani.res2sqlite:main",
],
},
python_requires=">=3.6",
install_requires=["numpy"],
tests_require=["pytest"],
)

import os
import pytest
@pytest.fixture(scope="session")
def testdata_dir():
"""Path to the testdata directory."""
return os.path.join(os.path.dirname(__file__), "testdata")

import pytest
from galvani import res2sqlite
have_mdbtools = subprocess.call(["which", "mdb-export"], stdout=subprocess.DEVNULL) == 0
def test_res2sqlite_help():
"""Check that the res2sqlite console script prints its help text.
This should work even when mdbtools is not installed.
"""
help_output = subprocess.check_output(["res2sqlite", "--help"])
assert b"Convert Arbin .res files to sqlite3 databases" in help_output
@pytest.mark.skipif(
have_mdbtools, reason="This tests the failure when mdbtools is not installed"
)
def test_convert_Arbin_no_mdbtools(testdata_dir, tmpdir):
"""Checks that the conversion fails with an appropriate error message."""
res_file = os.path.join(testdata_dir, "arbin1.res")
sqlite_file = os.path.join(str(tmpdir), "arbin1.s3db")
with pytest.raises(
RuntimeError, match="Could not locate the `mdb-export` executable."
):
res2sqlite.convert_arbin_to_sqlite(res_file, sqlite_file)
@pytest.mark.skipif(
not have_mdbtools, reason="Reading the Arbin file requires MDBTools"
)
@pytest.mark.parametrize("basename", ["arbin1", "UM34_Test005E"])
def test_convert_Arbin_to_sqlite_function(testdata_dir, tmpdir, basename):
"""Convert an Arbin file to SQLite using the functional interface."""
res_file = os.path.join(testdata_dir, basename + ".res")
sqlite_file = os.path.join(str(tmpdir), basename + ".s3db")
res2sqlite.convert_arbin_to_sqlite(res_file, sqlite_file)
assert os.path.isfile(sqlite_file)
with sqlite3.connect(sqlite_file) as conn:
csr = conn.execute("SELECT * FROM Channel_Normal_Table;")
csr.fetchone()
@pytest.mark.skipif(
not have_mdbtools, reason="Reading the Arbin file requires MDBTools"
)
@pytest.mark.parametrize("basename", ["arbin1", "UM34_Test005E"])
def test_convert_Arbin_to_sqlite_function_in_memory(testdata_dir, tmpdir, basename):
"""Convert an Arbin file to an in-memory SQLite database."""
res_file = os.path.join(testdata_dir, basename + ".res")
conn = None
with res2sqlite.convert_arbin_to_sqlite(res_file) as conn:
csr = conn.execute("SELECT * FROM Channel_Normal_Table;")
csr.fetchone()
@pytest.mark.skipif(
not have_mdbtools, reason="Reading the Arbin file requires MDBTools"
)
def test_convert_cmdline(testdata_dir, tmpdir):
"""Checks that the conversion fails with an appropriate error message."""
res_file = os.path.join(testdata_dir, "arbin1.res")
sqlite_file = os.path.join(str(tmpdir), "arbin1.s3db")
subprocess.check_call(["res2sqlite", res_file, sqlite_file])
assert os.path.isfile(sqlite_file)
with sqlite3.connect(sqlite_file) as conn:
csr = conn.execute("SELECT * FROM Channel_Normal_Table;")
csr.fetchone()

import re
from datetime import date, datetime
import numpy as np
from numpy.testing import assert_array_almost_equal, assert_array_equal, assert_allclose
import pytest
from galvani import BioLogic, MPTfile, MPRfile
from galvani.BioLogic import MPTfileCSV  # not exported
def test_open_MPT(testdata_dir):
mpt1, comments = MPTfile(os.path.join(testdata_dir, "bio_logic1.mpt"))
assert comments == []
assert mpt1.dtype.names == (
"mode", "ox/red", "error", "control changes", "Ns changes",
"counter inc.", "time/s", "control/V/mA", "Ewe/V", "dQ/mA.h", "P/W",
"I/mA", "(Q-Qo)/mA.h", "x",
"mode",
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
"time/s",
"control/V/mA",
"Ewe/V",
"dQ/mA.h",
"P/W",
"I/mA",
"(Q-Qo)/mA.h",
"x",
)
def test_open_MPT_fails_for_bad_file(testdata_dir):
with pytest.raises(ValueError, match="Bad first line"):
MPTfile(os.path.join(testdata_dir, "bio_logic1.mpr"))
def test_open_MPT_csv(testdata_dir):
mpt1, comments = MPTfileCSV(os.path.join(testdata_dir, "bio_logic1.mpt"))
assert comments == []
assert mpt1.fieldnames == [
"mode", "ox/red", "error", "control changes", "Ns changes",
"counter inc.", "time/s", "control/V/mA", "Ewe/V", "dq/mA.h", "P/W",
"<I>/mA", "(Q-Qo)/mA.h", "x",
"mode",
"ox/red",
"error",
"control changes",
"Ns changes",
"counter inc.",
"time/s",
"control/V/mA",
"Ewe/V",
"dq/mA.h",
"P/W",
"<I>/mA",
"(Q-Qo)/mA.h",
"x",
]
def test_open_MPT_csv_fails_for_bad_file(testdata_dir):
with pytest.raises((ValueError, UnicodeDecodeError)):
MPTfileCSV(os.path.join(testdata_dir, "bio_logic1.mpr"))
def test_colID_map_uniqueness():
assert not set(field_names).intersection(flag_names)
@pytest.mark.parametrize(
"colIDs, expected",
[
([1, 2, 3], [("flags", "u1")]),
([4, 6], [("time/s", "<f8"), ("Ewe/V", "<f4")]),
([1, 4, 21], [("flags", "u1"), ("time/s", "<f8")]),
([4, 6, 4], [("time/s", "<f8"), ("Ewe/V", "<f4"), ("time/s 2", "<f8")]),
([4, 9999], NotImplementedError),
],
)
def test_colID_to_dtype(colIDs, expected):
"""Test converting column ID to numpy dtype."""
if isinstance(expected, type) and issubclass(expected, Exception):
with pytest.raises(expected):
BioLogic.VMPdata_dtype_from_colIDs(colIDs)
return
expected_dtype = np.dtype(expected)
dtype, flags_dict = BioLogic.VMPdata_dtype_from_colIDs(colIDs)
assert np.dtype(dtype) == expected_dtype
@pytest.mark.parametrize(
"data, expected",
[
("02/23/17", date(2017, 2, 23)),
("10-03-05", date(2005, 10, 3)),
("11.12.20", date(2020, 11, 12)),
(b"01/02/03", date(2003, 1, 2)),
("13.08.07", ValueError),
("03-04/05", ValueError),
],
)
def test_parse_BioLogic_date(data, expected):
"""Test the parse_BioLogic_date function."""
if isinstance(expected, type) and issubclass(expected, Exception):
with pytest.raises(expected):
BioLogic.parse_BioLogic_date(data)
return
result = BioLogic.parse_BioLogic_date(data)
assert result == expected
@pytest.mark.parametrize(
"filename, startdate, enddate",
[
("bio_logic1.mpr", "2011-10-29", "2011-10-31"),
("bio_logic2.mpr", "2012-09-27", "2012-09-27"),
("bio_logic3.mpr", "2013-03-27", "2013-03-27"),
("bio_logic4.mpr", "2011-11-01", "2011-11-02"),
("bio_logic5.mpr", "2013-01-28", "2013-01-28"),
# bio_logic6.mpr has no end date because it does not have a VMP LOG module
("bio_logic6.mpr", "2012-09-11", None),
# C019P-0ppb-A_C01.mpr stores the date in a different format
("C019P-0ppb-A_C01.mpr", "2019-03-14", "2019-03-14"),
("Rapp_Error.mpr", "2010-12-02", "2010-12-02"),
("Ewe_Error.mpr", "2021-11-18", "2021-11-19"),
("col_27_issue_74.mpr", "2022-07-28", "2022-07-28"),
],
)
def test_MPR_dates(testdata_dir, filename, startdate, enddate):
"""Check that the start and end dates in .mpr files are read correctly."""
mpr = MPRfile(os.path.join(testdata_dir, filename))
assert mpr.startdate.strftime("%Y-%m-%d") == startdate
if enddate:
assert mpr.enddate.strftime("%Y-%m-%d") == enddate
else:
assert not hasattr(mpr, "enddate")
def test_open_MPR_fails_for_bad_file(testdata_dir):
with pytest.raises(ValueError, match="Invalid magic for .mpr file"):
MPRfile(os.path.join(testdata_dir, "arbin1.res"))
def timestamp_from_comments(comments):
for line in comments:
time_match = re.match(b"Acquisition started on : ([0-9/]+ [0-9:]+)", line)
if time_match:
timestamp = datetime.strptime(
time_match.group(1).decode("ascii"), "%m/%d/%Y %H:%M:%S"
)
return timestamp
raise AttributeError("No timestamp in comments")
def assert_MPR_matches_MPT(mpr, mpt, comments):
def assert_field_matches(fieldname, decimal):
if fieldname in mpr.dtype.fields:
assert_array_almost_equal(
mpr.data[fieldname], mpt[fieldname], decimal=decimal
)
def assert_field_exact(fieldname):
if fieldname in mpr.dtype.fields:
# Nothing uses the 0x40 bit of the flags
assert_array_equal(mpr.get_flag("counter inc."), mpt["counter inc."])
assert_array_almost_equal(mpr.data["time/s"],
mpt["time/s"],
decimal=2) # 2 digits in CSV
assert_array_almost_equal(
mpr.data["time/s"], mpt["time/s"], decimal=2
) # 2 digits in CSV
assert_field_matches("control/V/mA", decimal=6)
assert_field_matches("control/V", decimal=6)
assert_array_almost_equal(mpr.data["Ewe/V"],
mpt["Ewe/V"],
decimal=6) # 32 bit float precision
assert_array_almost_equal(
mpr.data["Ewe/V"], mpt["Ewe/V"], decimal=6
) # 32 bit float precision
assert_field_matches("dQ/mA.h", decimal=17) # 64 bit float precision
assert_field_matches("dQ/mA.h", decimal=16) # 64 bit float precision
assert_field_matches("P/W", decimal=10) # 32 bit float precision for 1.xxE-5
assert_field_matches("I/mA", decimal=6) # 32 bit float precision
assert_field_matches("(Q-Qo)/C", decimal=6) # 32 bit float precision
try:
assert timestamp_from_comments(comments) == mpr.timestamp.replace(microsecond=0)
except AttributeError:
pass
def assert_MPR_matches_MPT_v2(mpr, mpt, comments):
"""
Asserts that the fields in the MPR.data are the same as in the MPT.
Modified from assert_MPR_matches_MPT. Automatically converts dtype from MPT data
to dtype from MPR data before comparing the columns.
Special case for EIS_indicators: these fields are valid only at f < 100 kHz, so
their values are replaced by -1 or 0 at high frequency in the MPT file; this is
not the case in the MPR data.
Parameters
----------
mpr : MPRfile
Data extracted with the MPRfile class.
mpt : np.array
Data extracted with the MPTfile function.
comments : list
Header comments returned by MPTfile, used to check the timestamp.
Returns
-------
None.
"""
def assert_field_matches(fieldname):
EIS_quality_indicators = [
"THD Ewe/%",
"NSD Ewe/%",
"NSR Ewe/%",
"|Ewe h2|/V",
"|Ewe h3|/V",
"|Ewe h4|/V",
"|Ewe h5|/V",
"|Ewe h6|/V",
"|Ewe h7|/V",
"THD I/%",
"NSD I/%",
"NSR I/%",
"|I h2|/A",
"|I h3|/A",
"|I h4|/A",
"|I h5|/A",
"|I h6|/A",
"|I h7|/A",
]
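# EC-Lab only reports these harmonic/noise indicators for f < 100 kHz
# (see docstring), so rows at higher frequency are excluded before comparing.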
if fieldname in EIS_quality_indicators: # EIS quality indicators only valid for f < 100kHz
index_inf_100k = np.where(mpr.data["freq/Hz"] < 100000)[0]
assert_allclose(
mpr.data[index_inf_100k][fieldname],
mpt[index_inf_100k][fieldname].astype(mpr.data[fieldname].dtype),
)
elif fieldname == "<Ewe>/V":
assert_allclose(
mpr.data[fieldname],
mpt["Ewe/V"].astype(mpr.data[fieldname].dtype),
)
elif fieldname == "<I>/mA":
assert_allclose(
mpr.data[fieldname],
mpt["I/mA"].astype(mpr.data[fieldname].dtype),
)
elif fieldname == "dq/mA.h":
assert_allclose(
mpr.data[fieldname],
mpt["dQ/mA.h"].astype(mpr.data[fieldname].dtype),
)
else:
assert_allclose(
mpr.data[fieldname],
mpt[fieldname].astype(mpr.data[fieldname].dtype),
)
def assert_field_exact(fieldname):
if fieldname in mpr.dtype.fields:
assert_array_equal(mpr.data[fieldname], mpt[fieldname])
for key in mpr.flags_dict.keys():
assert_array_equal(mpr.get_flag(key), mpt[key])
for d in mpr.dtype.descr[1:]:
assert_field_matches(d[0])
try:
assert timestamp_from_comments(comments) == mpr.timestamp.replace(microsecond=0)
except AttributeError:
pass
@pytest.mark.parametrize(
"basename",
[
"bio_logic1",
"bio_logic2",
# No bio_logic3.mpt file
"bio_logic4",
# bio_logic5 and bio_logic6 are special cases
"CV_C01",
"121_CA_455nm_6V_30min_C01",
"020-formation_CB5",
],
)
def test_MPR_matches_MPT(testdata_dir, basename):
"""Check the MPR parser against the MPT parser.
Load a binary .mpr file and a text .mpt file which should contain
exactly the same data. Check that the loaded data actually match.
"""
binpath = os.path.join(testdata_dir, basename + ".mpr")
txtpath = os.path.join(testdata_dir, basename + ".mpt")
mpr = MPRfile(binpath)
mpt, comments = MPTfile(txtpath, encoding="latin1")
assert_MPR_matches_MPT(mpr, mpt, comments)
def test_MPR5_matches_MPT5(testdata_dir):
mpr = MPRfile(os.path.join(testdata_dir, "bio_logic5.mpr"))
mpt, comments = MPTfile(
(
re.sub(b"\tXXX\t", b"\t0\t", line)
for line in open(os.path.join(testdata_dir, "bio_logic5.mpt"), mode="rb")
)
)
assert_MPR_matches_MPT(mpr, mpt, comments)
def test_MPR6_matches_MPT6(testdata_dir):
mpr = MPRfile(os.path.join(testdata_dir, "bio_logic6.mpr"))
mpt, comments = MPTfile(os.path.join(testdata_dir, "bio_logic6.mpt"))
mpr.data = mpr.data[:958] # .mpt file is incomplete
assert_MPR_matches_MPT(mpr, mpt, comments)
@pytest.mark.parametrize(
"basename_v1150",
["v1150_CA", "v1150_CP", "v1150_GCPL", "v1150_GEIS", "v1150_MB", "v1150_OCV", "v1150_PEIS"],
)
def test_MPR_matches_MPT_v1150(testdata_dir, basename_v1150):
"""Check the MPR parser against the MPT parser.
Load a binary .mpr file and a text .mpt file which should contain
exactly the same data. Check that the loaded data actually match.
"""
binpath = os.path.join(testdata_dir, "v1150", basename_v1150 + ".mpr")
txtpath = os.path.join(testdata_dir, "v1150", basename_v1150 + ".mpt")
mpr = MPRfile(binpath)
mpt, comments = MPTfile(txtpath, encoding="latin1")
assert_MPR_matches_MPT_v2(mpr, mpt, comments)
@pytest.mark.skip(reason="Test data file is missing")
def test_loop_from_file(testdata_dir):
"""Check if the loop_index is correctly extracted from the _LOOP.txt file
"""
mpr = MPRfile(os.path.join(testdata_dir, "running", "running_OCV.mpr"))
assert mpr.loop_index is not None, "No loop_index found"
assert len(mpr.loop_index) == 4, "loop_index is not the right size"
assert_array_equal(mpr.loop_index, [0, 4, 8, 11], "loop_index values are wrong")
@pytest.mark.skip(reason="Test data file is missing")
def test_timestamp_from_file(testdata_dir):
"""Check if the loop_index is correctly extracted from the _LOOP.txt file
"""
mpr = MPRfile(os.path.join(testdata_dir, "running", "running_OCV.mpr"))
assert hasattr(mpr, "timestamp"), "No timestamp found"
assert mpr.timestamp.timestamp() == pytest.approx(1707299985.908), "timestamp value is wrong"

Binary test data files added or updated (stored via Git LFS; contents not shown):

tests/testdata/: 020-formation_CB5.mpr, 020-formation_CB5.mpt,
121_CA_455nm_6V_30min_C01.mpr, 121_CA_455nm_6V_30min_C01.mpt,
C019P-0ppb-A_C01.mpr, CV_C01.mpr, CV_C01.mpt, EIS_latin1.mpt, Ewe_Error.mpr,
Rapp_Error.mpr, UM34_Test005E.res, arbin1.res, bio_logic1.mpr, bio_logic1.mpt,
bio_logic2.mpr, bio_logic2.mpt, bio_logic3.mpr, bio_logic4.mpr, bio_logic4.mpt,
bio_logic5.mpr, bio_logic5.mpt, bio_logic6.mpr, bio_logic6.mpt,
col_27_issue_74.mpr

tests/testdata/v1150/: v1150_CA.mpr, v1150_CA.mpt, v1150_CP.mpr, v1150_CP.mpt,
v1150_GCPL.mpr, v1150_GCPL.mpt, v1150_GEIS.mpr, v1150_GEIS.mpt, v1150_MB.mpr,
v1150_MB.mpt, v1150_OCV.mpr, v1150_OCV.mpt, v1150_PEIS.mpr, v1150_PEIS.mpt

Accompanying .license files declare:

020-formation_CB5.mpr and 020-formation_CB5.mpt:
SPDX-FileCopyrightText: Chihyu Chen <chihyu.chen@molicel.com>
SPDX-License-Identifier: CC-BY-4.0

Ewe_Error.mpr and Rapp_Error.mpr:
SPDX-FileCopyrightText: Danzi Federico
SPDX-License-Identifier: CC-BY-4.0

UM34_Test005E.res:
SPDX-FileCopyrightText: Nikhil Shetty
SPDX-License-Identifier: CC-BY-4.0

# SPDX-FileCopyrightText: 2017-2021 Christopher Kerr <chris.kerr@mykolab.ch>
# SPDX-License-Identifier: GPL-3.0-or-later
[tox]
envlist = py38,py39,py310,py311
[testenv]
deps =
flake8
@@ -15,3 +15,10 @@ commands =
[flake8]
exclude = build,dist,*.egg-info,.cache,.git,.tox,__pycache__
max-line-length = 100
[gh]
python =
3.11 = py311
3.10 = py310
3.9 = py39
3.8 = py38
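# The [gh] table above is read by the tox-gh plugin: it maps the Python
# version provided by the GitHub Actions runner to the tox env to run.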