111 Commits
0.0.1 ... 0.2.1

Author SHA1 Message Date
54c3813b32 Update version in setup.py to 0.2.1 2021-09-01 07:13:49 +03:00
3b68a30975 Merge pull request #63 from chatcannon/time-format-dotted
Add '%m.%d.%y' date format for .mpr file timestamps
2021-09-01 07:06:58 +03:00
7ef5be147b Merge pull request #66 from chatcannon/add-column-R-Ohm
Add column IDs for 'R/Ohm' and 'Rapp/Ohm'
2021-08-31 18:19:09 +03:00
de182bd400 Add a test for the Rapp/Ohm column ID 2021-08-30 19:45:30 +03:00
9bbff69b36 Add column IDs for 'R/Ohm' and 'Rapp/Ohm' 2021-08-30 19:42:26 +03:00
bcd7c5a9b8 Add a test for the parse_BioLogic_date function 2021-07-03 16:08:48 +03:00
4ebdc663a9 Factor the date parsing code out to a separate function 2021-07-02 08:05:27 +03:00
cec14e6d50 Add '%m.%d.%y' date format for .mpr file timestamps
Closes #60
2021-05-23 21:04:29 +03:00
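The idea behind this commit (and the earlier '%m-%d-%y' fix) can be sketched as a fallback chain over candidate timestamp formats; this is a minimal illustration, not galvani's actual parse_BioLogic_date implementation:

```python
from datetime import date, datetime


def parse_date(text):
    """Try each known BioLogic timestamp format in turn (sketch only;
    the real set of formats in galvani may differ)."""
    for fmt in ('%m/%d/%y', '%m-%d-%y', '%m.%d.%y'):
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue  # wrong separator for this format; try the next one
    raise ValueError('unrecognised date format: %r' % text)


# A dotted timestamp, as found in the .mpr files from issue #60:
assert parse_date('05.23.21') == date(2021, 5, 23)
```

Factoring this into one function (as a later commit does) means every new separator variant is a one-line change to the format tuple.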
dd605e8fd3 Merge pull request #61 from chatcannon/fsfe-reuse-metadata
Add REUSE metadata
2021-04-25 20:25:16 +03:00
352fc436c0 Add REUSE check to the tox tests 2021-03-21 10:20:40 +02:00
def2bba587 Add MANIFEST.in to include licence files in the source tarball
Exclude the MIT licence since the GitHub CodeQL file is not packaged.
2021-03-20 16:30:08 +02:00
741b17d54d Wheel package actually only needs GPLv3 licence file
The files with other licences are not included in the wheel package
2021-03-20 16:25:55 +02:00
0560643ea2 Add setup.cfg to include licences in .whl package 2021-03-20 16:16:37 +02:00
d00319fcda Add GPLv3+ License-Identifier to remaining files 2021-03-20 16:13:08 +02:00
c57cd523ff Use CC0 Licence for non-copyrightable data 2021-03-20 16:09:15 +02:00
4257a294fb Add SPDX-FileCopyrightText to BioLogic.py
There are additional committers who have made changes to this
file, but only adding new colIDs etc., which is not copyrightable.

Here is the corresponding git-shortlog output:

Dennis (1):
      Improved compatibility with .mpr files

Peter Attia (1):
      Update BioLogic.py

Tim (3):
      improved parsing for PEIS files
      new column types
      new column types

dennissheberla (2):
      Improved compatibility with .mpt files
      Improved compatibility with new .mpr files

nhshetty-99 (3):
      Added colIDs 74 and 462 to VMPdata_colID_dtype_map
      Changed colID 74 and 462 order from original addition
      Added column 469 to BioLogic.py
2021-03-20 16:03:26 +02:00
b8742bf1ee Add SPDX-FileCopyrightText for the README 2021-03-20 15:35:25 +02:00
a78b7113a7 Remove .flake8 file which is no longer used
The flake8 configuration is in tox.ini instead.
2021-03-20 15:32:19 +02:00
635655e481 Add licence metadata for imported CodeQL file 2021-03-20 15:31:10 +02:00
4b2042501d Add SPDX-FileCopyrightText for sole author files 2021-03-20 15:26:40 +02:00
dd9cf01396 Ran reuse init to create initial REUSE metadata 2021-03-20 15:07:47 +02:00
ce011f2f37 Merge pull request #57 from echemdata/add-codeql-analysis
Enable CodeQL security analysis
2021-03-20 14:58:40 +02:00
e11419e851 Create codeql-analysis.yml
As suggested by https://github.com/echemdata/galvani/security
2020-11-07 17:25:47 +02:00
74413c231e Merge pull request #56 from chatcannon/release-0.2.0
Bump release version to 0.2.0
2020-10-31 08:59:18 +02:00
ca8845bcc9 Bump release version to 0.2.0
Update Python version requirement to >=3.6
2020-10-31 08:52:45 +02:00
db9b0c0669 Merge pull request #54 from bayesfactor/master
Many new column types
2020-10-31 08:47:19 +02:00
995bcf3d71 Merge pull request #55 from chatcannon/test-py39
Test with Python 3.9, stop testing Python 3.5
2020-10-31 08:45:35 +02:00
63f674d897 Test with Python 3.9, stop testing Python 3.5
Python 3.5 is now end-of-life, Python 3.9 was released 2020-10-05
2020-10-31 08:31:54 +02:00
21085454fd Merge pull request #53 from chatcannon/arbin-5-26
Add support for Arbin .res files with version number up to 5.26
2020-10-31 08:30:45 +02:00
23761dd5bf Add the new data file to test_Arbin.py 2020-10-31 07:51:57 +02:00
8cfc84922b Add example Arbin 5.26 data file to get_testdata.sh 2020-10-31 07:50:06 +02:00
Tim 5baa6f6a7f new column types
ec-lab v3 has new column types
2020-10-25 05:58:53 -07:00
Tim 0757306be4 Merge pull request #1 from echemdata/master
get the latest
2020-10-25 05:48:16 -07:00
68e00a30ce Add PRIMARY KEY information 2020-10-17 18:34:06 +03:00
a1a056d304 Add two new tables that are in 5.26 but not 5.23
Can_BMS_Info_Table and Can_BMS_Data_Table
2020-10-17 18:10:58 +03:00
c90d604096 Use version number to build a list of tables to convert 2020-10-17 17:57:06 +03:00
c25e755296 Reformat lists of table names 2020-10-17 17:48:03 +03:00
60639299b8 Parse out version number from Version_Table
Previously the version was compared for strict equality so a
higher version did not match.
2020-10-11 20:40:23 +03:00
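The fix described here amounts to turning the version string into something ordered, so that a `>=` check replaces the old strict equality. A minimal sketch, assuming a trailing "major.minor" number in the version string (the real layout of Arbin's Version_Table may differ):

```python
def parse_version(version_text):
    """Extract a comparable (major, minor) tuple from a version string.

    Illustrative only; the actual string format read by galvani's
    res2sqlite may not match this.
    """
    number = version_text.rsplit(None, 1)[-1]  # e.g. 'Results File 5.26' -> '5.26'
    major, minor = number.split('.')
    return (int(major), int(minor))


# Tuple comparison lets a newer file (5.26) satisfy a 5.23 baseline,
# where a strict string equality check previously failed:
assert parse_version('Results File 5.26') >= (5, 23)
```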
a0700b0276 Add extra columns to Global and Battery Data tables for v5.26
Arbin software v5.26 adds additional columns to these existing
tables; these need to be added to the output database so that
the conversion does not fail.
2020-10-11 16:33:25 +03:00
f0c3c6f6c5 Merge pull request #52 from petermattia/patch-2
Add usage example to README
2020-10-11 16:08:46 +03:00
Peter Attia 2a75b3bb19 Update README.md
Show quick example of how to use
2020-10-08 09:12:38 -07:00
a2b3b26917 Merge pull request #50 from petermattia/patch-1
Update BioLogic.py
2020-09-26 12:56:27 +03:00
Peter Attia 0c5348deeb Update BioLogic.py
Add more impedance-related entries to `VMPdata_colID_dtype_map`
2020-09-14 20:52:30 -04:00
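Several commits in this log add entries to `VMPdata_colID_dtype_map`. The mechanism can be sketched as a map from numeric column IDs to (name, dtype) pairs, from which a numpy structured dtype is built; the IDs and names below are placeholders, not galvani's real table:

```python
import numpy as np

# Illustrative only: galvani's BioLogic.py maps many more IDs, and
# these particular ID numbers / names are assumptions for the sketch.
colID_dtype_map = {
    4: ('time/s', '<f8'),
    6: ('Ewe/V', '<f4'),
    74: ('|Ewe|/V', '<f4'),
}


def dtype_from_colIDs(colIDs):
    """Build a numpy structured dtype from a sequence of column IDs."""
    return np.dtype([colID_dtype_map[cid] for cid in colIDs])


dt = dtype_from_colIDs([4, 6])
# dt.names == ('time/s', 'Ewe/V')
```

An unknown ID raises `KeyError` here, which is why several commits add debug info reporting the previously parsed column when a lookup fails.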
90b113a0ab Merge pull request #38 from bcolsen/col_168
Added RCMP column and column debug info
2020-07-09 19:59:48 +03:00
bcolsen 18a1ce6848 fix pep 2020-07-07 18:13:53 -06:00
bcolsen 18e8a450fa pep8 fix 2020-07-07 18:07:30 -06:00
bcolsen 4098890c05 changed error to report previous column name 2020-07-07 17:57:59 -06:00
bcolsen 68bac0a5aa Added RCMP column and column debug info 2020-07-07 17:57:59 -06:00
a343d37b2a Merge pull request #40 from chatcannon/remove-python-2
Remove Python 2 support
2020-07-05 11:05:08 +03:00
e67edf9e17 Merge pull request #43 from nhshetty-99/master
Added colIDs 74 and 462 to VMPdata_colID_dtype_map
2020-07-03 21:30:30 +03:00
nhshetty-99 37487da5d3 Added column 469 to BioLogic.py 2020-07-03 14:15:37 -04:00
nhshetty-99 8370a58109 Changed colID 74 and 462 order from original addition 2020-06-28 13:06:11 -04:00
nhshetty-99 c703f31da2 Added colIDs 74 and 462 to VMPdata_colID_dtype_map 2020-06-27 22:40:31 -04:00
81fbb3dde3 Test with all Python versions 3.5-3.8 2020-02-16 14:35:34 +02:00
b2fb092ea3 Remove compability code for Python 2 subprocess module 2020-02-16 14:28:57 +02:00
599413c42f Remove remaining Python 2 compatibility from BioLogic code 2020-02-16 10:01:53 +02:00
87825b7891 Remove str3 compatibility function 2020-02-16 09:59:46 +02:00
c2e7a1602f Remove maketrans compatibility code 2020-02-16 09:49:40 +02:00
9ba43ecc2e Add python_requires to setup.py 2020-02-16 09:46:06 +02:00
bfdc9aae28 Remove Python 2 from tox and travis tests 2020-02-16 09:34:57 +02:00
a74a0267c2 Merge pull request #36 from bcolsen/version3
Added initial support for VMP data module version 3
2019-10-08 22:22:37 +02:00
bcolsen 72d79146e6 Added initial support for VMP data module version 3 2019-10-08 13:23:18 -06:00
0c0b48ddcc Release version 0.1.0 (#33)
Update package URL to the new echemdata repo
2019-06-02 13:34:39 +02:00
e71076bda3 Merge pull request #30 from chatcannon/test-and-fix-Arbin
Fix Arbin reading code
2019-06-02 13:18:32 +02:00
0ea049e279 Merge branch 'master' into test-and-fix-Arbin 2019-05-16 17:29:38 +02:00
a4cf8c1420 Merge pull request #32 from chatcannon/flake8
Add code style checking with flake8
2019-05-16 17:25:36 +02:00
aab135391a Change max-line-length to 100 and refactor all longer lines 2019-05-15 07:09:11 +02:00
8abab57c06 Fixed some more flake8 warnings 2019-05-12 09:20:33 +02:00
3440047dc2 Fixed flake8 warning about lambda assignment 2019-05-12 09:15:20 +02:00
d137bfccef Add flake8 to tox.ini 2019-05-12 09:14:57 +02:00
ed43de1326 Fix flake8 warnings on comments style 2019-05-12 09:13:01 +02:00
1c8335289a Add flake8 configuration 2019-05-12 09:12:40 +02:00
6787a7ec03 Merge branch 'master' into test-and-fix-Arbin 2019-05-05 08:14:43 +02:00
e5aada3a85 Merge branch 'master' into flake8 2019-05-05 08:12:52 +02:00
a41b40c7a4 Merge pull request #26 from bcolsen/dupli_cols
[PR] Biologic file with duplicate columns
2019-05-04 08:38:36 +02:00
2a36713b06 Fixed some flake8 warnings in res2sqlite.py 2019-05-03 20:37:27 +02:00
f2b62265b9 Fixed some flake8 warnings in BioLogic.py 2019-05-03 20:30:12 +02:00
61e2ac8f57 Added unit tests for the VMPdata_dtype_from_colIDs function 2019-05-03 19:46:45 +02:00
c401aca741 Get rid of flags2_dict as flags2 doesn't actually exist 2019-05-03 19:15:34 +02:00
1f57e48602 Refactor the VMPdata_dtype_from_colIDs function 2019-05-03 19:12:18 +02:00
6b0f8b6d37 Add a VMPdata_colID_flags_map dict 2019-05-03 18:47:02 +02:00
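The "duplicate columns" problem addressed by PR #26 arises because numpy structured dtypes reject repeated field names. One possible approach (a sketch under that assumption; not necessarily what galvani does) is to suffix repeats before building the dtype:

```python
import numpy as np


def dedup_names(names):
    """Make column names unique by suffixing repeats (sketch)."""
    seen = {}
    unique = []
    for name in names:
        count = seen.get(name, 0)
        seen[name] = count + 1
        unique.append(name if count == 0 else '%s_%d' % (name, count))
    return unique


# A BioLogic file that declares 'I/mA' twice would otherwise make
# np.dtype raise on the duplicate field name:
fields = dedup_names(['I/mA', 'Ewe/V', 'I/mA'])
dt = np.dtype([(f, '<f4') for f in fields])
```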
56a321f8e3 Formatting 2019-05-03 18:39:42 +02:00
d991cd496e Merge branch 'master' into dupli_cols 2019-05-03 18:28:38 +02:00
531cfc6a42 Merge pull request #24 from bcolsen/loop_module
[PR] Add parsing of loop index modules
2019-04-12 07:51:15 +02:00
bcolsen ef1ea9a2f4 Default loop to none and trim the trailing zeros 2019-04-11 17:30:05 -06:00
bcolsen b3c5f36e11 flag fix 2019-04-11 11:59:18 -06:00
bcolsen d1d53e97fa biologic file with duplicate columns 2019-04-09 17:48:38 -06:00
bcolsen 4ba61aa5d8 added parsing of loop_modules 2019-04-09 17:23:16 -06:00
4381b02242 Add .pytest_cache to Travis cache 2019-04-03 08:16:55 +02:00
846a5b3149 Catch FileNotFoundError from Popen and re-raise a more helpful message 2019-04-03 08:14:05 +02:00
6a8fbe71a4 Add some tests for the res2sqlite command-line tool
Check that the --help option works even if mdbtools is not installed
2019-04-02 23:11:37 +02:00
557e755f03 Move Popen call outside the try/finally block
Ensure that all variables used in the except and finally blocks
are always defined - fixes #23

In Python 3, Popen objects can be used as contextmanagers, but not
in Python 2.7
2019-04-02 23:09:53 +02:00
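The error-handling pattern behind these two commits can be sketched as: create the `Popen` object before entering any try/finally that cleans it up, and convert a missing-binary `FileNotFoundError` into a message naming the real dependency. The function name and message wording here are illustrative, not galvani's exact code:

```python
import subprocess


def popen_with_hint(args):
    """Launch a subprocess, re-raising a missing-executable error
    with a more helpful message (sketch)."""
    try:
        # The Popen call sits outside any try/finally that uses the
        # returned object, so cleanup code never sees an undefined
        # variable when launching fails (the bug behind issue #23).
        return subprocess.Popen(args, stdout=subprocess.PIPE)
    except FileNotFoundError as exc:
        raise RuntimeError(
            'Could not run %s -- is mdbtools installed?' % args[0]) from exc
```

With this shape, `res2sqlite --help` can still work on a machine without mdbtools, and an actual conversion fails with a message pointing at the missing tool rather than a bare traceback.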
a1b73867ea Add a test that a sensible error is raised when MDBTools is not found
This is the error that happens in issue #23
2019-04-02 23:07:12 +02:00
5530a7a8ff Add a simple test for loading Arbin .res files 2019-04-02 23:06:23 +02:00
d6d6bf1ac7 Use a pytest fixture to locate the testdata directory 2019-04-02 21:34:34 +02:00
85cc3f523e Release version 0.0.2 2019-03-30 15:52:44 +01:00
b977115d6e Also try parsing dates as '%m-%d-%y' - fixes #20 2019-03-30 15:39:40 +01:00
2471148c21 Add new test data file with different date format
This tests for issue #20

Thanks @JBWarrington for providing this file
2019-03-30 15:38:58 +01:00
7a5887fb38 Update existing get_testdata links to HTTPS version 2019-03-30 15:31:18 +01:00
2738396c9e Merge branch 'numpy-deprecations'
Fixes some deprecation warnings with recent versions of Numpy and Python

Closes #22
2019-03-16 13:51:39 +01:00
dcc8ec7fcc Fix 'invalid escape sequence' warnings 2019-03-16 13:50:50 +01:00
b08c2f4435 Use array.item() instead of np.asscalar() 2019-03-16 13:43:26 +01:00
1bcbc16bab Use np.frombuffer instead of np.fromstring
Fixes #22
2019-03-16 13:41:49 +01:00
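For context on this deprecation fix: `np.fromstring` on binary data was deprecated in favour of `np.frombuffer`, which reads the same bytes without copying (and returns a read-only view). A small self-contained illustration:

```python
import numpy as np

raw = bytes(range(8))  # b'\x00\x01\x02\x03\x04\x05\x06\x07'
# Old, deprecated: np.fromstring(raw, dtype='<u2')
# Replacement: zero-copy, read-only view over the buffer
values = np.frombuffer(raw, dtype='<u2')  # four little-endian uint16s
# values[0] == 0x0100 == 256
```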
e52efeb9bd Merge branch 'pytest'
Changes from `nosetests` to `pytest` for running the tests - fixes #5
Pytest is considerably more flexible, and Nose is no longer maintained.
2019-03-16 13:35:53 +01:00
d1e8616f1e Use pytest.mark.parametrize to combine test cases 2019-03-16 13:35:13 +01:00
a618f75bb6 Change testing config to use pytest instead of nosetests
Closes #5
2019-03-16 13:10:08 +01:00
b110162763 Replace nose.raises with pytest.raises 2019-03-16 13:03:54 +01:00
de29b0863c Replace nose.eq_ with assert x == y 2019-03-16 12:59:23 +01:00
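The two replacements in this migration look like the following (a generic sketch with a made-up function, not code from galvani's test suite):

```python
import pytest


def reciprocal(x):
    return 1 / x


# nose style: @raises(ZeroDivisionError) decorating a whole test function.
# pytest style: a context manager that pins the expected error to one statement.
with pytest.raises(ZeroDivisionError):
    reciprocal(0)

# nose style: eq_(reciprocal(4), 0.25) -> a plain assert under pytest,
# which rewrites asserts to show both values on failure.
assert reciprocal(4) == 0.25
```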
4365c08e8b Merge pull request #19 from bayesfactor/patch-2
new column types
2019-03-16 11:53:17 +01:00
Tim 880b4a0a2d new column types
Introduced new column types that show up in GEIS files
2019-03-11 10:23:26 -07:00
da67a36308 Merge branch 'pypi-release'
Closes #7
2019-03-10 10:44:49 +01:00
20 changed files with 1206 additions and 376 deletions

.github/workflows/codeql-analysis.yml (vendored, new file, +71 lines)

@@ -0,0 +1,71 @@
# SPDX-FileCopyrightText: 2006-2020 GitHub, Inc.
# SPDX-License-Identifier: MIT

# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
# ******** NOTE ********

name: "CodeQL"

on:
  push:
    branches: [ master ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ master ]
  schedule:
    - cron: '22 23 * * 1'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more...
        # https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#overriding-automatic-language-detection

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v1
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.
        # queries: ./path/to/local/query, your-org/your-repo/queries@main

    # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
    # If this step fails, then you should remove it and run the build manually (see below)
    - name: Autobuild
      uses: github/codeql-action/autobuild@v1

    # Command-line programs to run using the OS shell.
    # https://git.io/JvXDl

    # If the Autobuild fails above, remove it and uncomment the following three lines
    # and modify them (or add more) to build your code if your project
    # uses a compiled language

    #- run: |
    #   make bootstrap
    #   make release

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v1

.gitignore (vendored, +3 lines)

@@ -1,3 +1,6 @@
# SPDX-FileCopyrightText: 2013-2017 Christopher Kerr <chris.kerr@mykolab.ch>
# SPDX-License-Identifier: CC0-1.0
*.py[cod]
# C extensions

.reuse/dep5 (new file, +10 lines)

@@ -0,0 +1,10 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Galvani
Upstream-Contact: Christopher Kerr <chris.kerr@mykolab.ch>
Source: https://github.com/echemdata/galvani
# Sample paragraph, commented out:
#
# Files: src/*
# Copyright: $YEAR $NAME <$CONTACT>
# License: ...

.travis.yml (modified)
@@ -1,13 +1,17 @@
+# SPDX-FileCopyrightText: 2017-2020 Christopher Kerr <chris.kerr@mykolab.ch>
+# SPDX-License-Identifier: GPL-3.0-or-later
 sudo: false
 language: python
 cache:
   directories:
     - .tox
+    - .pytest_cache
     - tests/testdata
 python:
-  - "2.7"
-  - "3.5"
-  # - "3.7" # Python 3.7 is not available on travis CI yet
+  - "3.6"
+  - "3.7"
+  - "3.8"
+  - "3.9"
 install:
   - pip install tox-travis
   - sh get_testdata.sh

LICENSES/CC0-1.0.txt (new file, +121 lines)

@@ -0,0 +1,121 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.

LICENSES/GPL-3.0-or-later.txt (new file, +232 lines)
@@ -0,0 +1,232 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.

LICENSES/MIT.txt
@@ -0,0 +1,9 @@
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

MANIFEST.in
@@ -0,0 +1,9 @@
# SPDX-FileCopyrightText: 2021 Christopher Kerr
# SPDX-License-Identifier: CC0-1.0
recursive-include LICENSES *.txt
include README.md
# The GitHub CodeQL file is not included in the tarball,
# so its licence does not need to be included either
exclude LICENSES/MIT.txt

README.md
@@ -1,12 +1,26 @@
galvani
=======
<!---
SPDX-FileCopyrightText: 2013-2020 Christopher Kerr, Peter Attia
SPDX-License-Identifier: GPL-3.0-or-later
-->
Read proprietary file formats from electrochemical test stations
## Bio-Logic .mpr files ##
Use the `MPRfile` class from BioLogic.py (exported in the main package).
````python
from galvani import BioLogic
import pandas as pd
mpr_file = BioLogic.MPRfile('test.mpr')
df = pd.DataFrame(mpr_file.data)
````
## Arbin .res files ##
Use the res2sqlite.py script to convert the .res file to a sqlite3 database.
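Once the converter has run, the resulting database can be inspected with the standard library; a minimal sketch (the output file name `converted.s3db` is an illustrative assumption, not something the script mandates):

```python
import sqlite3

# Hypothetical follow-up to res2sqlite.py: open the converted database
# and list whatever tables the conversion produced.
con = sqlite3.connect('converted.s3db')
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
con.close()
```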

BioLogic.py
@@ -1,27 +1,22 @@
# -*- coding: utf-8 -*-
"""Code to read in data files from Bio-Logic instruments"""
# SPDX-FileCopyrightText: 2013-2020 Christopher Kerr, "bcolsen"
#
# SPDX-License-Identifier: GPL-3.0-or-later
__all__ = ['MPTfileCSV', 'MPTfile']
import sys
import re
import csv
from os import SEEK_SET
import time
from datetime import date, datetime, timedelta
from collections import OrderedDict
from collections import defaultdict, OrderedDict
import numpy as np
if sys.version_info.major <= 2:
    str3 = str
    from string import maketrans
else:
    str3 = lambda b: str(b, encoding='ascii')
    maketrans = bytes.maketrans
def fieldname_to_dtype(fieldname):
"""Converts a column header from the MPT file into a tuple of
canonical name and appropriate numpy dtype"""
@@ -48,13 +43,13 @@ def fieldname_to_dtype(fieldname):
raise ValueError("Invalid column header: %s" % fieldname)
def comma_converter(float_string):
"""Convert numbers to floats whether the decimal point is '.' or ','"""
trans_table = maketrans(b',', b'.')
return float(float_string.translate(trans_table))
def comma_converter(float_text):
"""Convert text to float whether the decimal point is '.' or ','"""
trans_table = bytes.maketrans(b',', b'.')
return float(float_text.translate(trans_table))
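The revised converter can be exercised on its own; this runnable sketch simply reproduces the new version shown in the diff:

```python
def comma_converter(float_text):
    """Convert text to float whether the decimal point is '.' or ','"""
    # bytes.maketrans builds a translation table mapping b',' to b'.'
    trans_table = bytes.maketrans(b',', b'.')
    return float(float_text.translate(trans_table))

# Both decimal separators give the same float
print(comma_converter(b'3,14'), comma_converter(b'3.14'))
```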
def MPTfile(file_or_path):
def MPTfile(file_or_path, encoding='ascii'):
"""Opens .mpt files as numpy record arrays
Checks for the correct headings, skips any comments and returns a
@@ -70,19 +65,20 @@ def MPTfile(file_or_path):
if magic != b'EC-Lab ASCII FILE\r\n':
raise ValueError("Bad first line for EC-Lab file: '%s'" % magic)
nb_headers_match = re.match(b'Nb header lines : (\d+)\s*$', next(mpt_file))
nb_headers_match = re.match(rb'Nb header lines : (\d+)\s*$',
next(mpt_file))
nb_headers = int(nb_headers_match.group(1))
if nb_headers < 3:
raise ValueError("Too few header lines: %d" % nb_headers)
## The 'magic number' line, the 'Nb headers' line and the column headers
## make three lines. Every additional line is a comment line.
# The 'magic number' line, the 'Nb headers' line and the column headers
# make three lines. Every additional line is a comment line.
comments = [next(mpt_file) for i in range(nb_headers - 3)]
fieldnames = str3(next(mpt_file)).strip().split('\t')
fieldnames = next(mpt_file).decode(encoding).strip().split('\t')
record_type = np.dtype(list(map(fieldname_to_dtype, fieldnames)))
## Must be able to parse files where commas are used for decimal points
# Must be able to parse files where commas are used for decimal points
converter_dict = dict(((i, comma_converter)
for i in range(len(fieldnames))))
mpt_array = np.loadtxt(mpt_file, dtype=record_type,
@@ -107,13 +103,13 @@ def MPTfileCSV(file_or_path):
    if magic.rstrip() != 'EC-Lab ASCII FILE':
        raise ValueError("Bad first line for EC-Lab file: '%s'" % magic)
    nb_headers_match = re.match('Nb header lines : (\d+)\s*$', next(mpt_file))
    nb_headers_match = re.match(r'Nb header lines : (\d+)\s*$', next(mpt_file))
    nb_headers = int(nb_headers_match.group(1))
    if nb_headers < 3:
        raise ValueError("Too few header lines: %d" % nb_headers)
    ## The 'magic number' line, the 'Nb headers' line and the column headers
    ## make three lines. Every additional line is a comment line.
    # The 'magic number' line, the 'Nb headers' line and the column headers
    # make three lines. Every additional line is a comment line.
    comments = [next(mpt_file) for i in range(nb_headers - 3)]
    mpt_csv = csv.DictReader(mpt_file, dialect='excel-tab')
@@ -143,109 +139,179 @@ VMPmodule_hdr = np.dtype([('shortname', 'S10'),
                          ('version', '<u4'),
                          ('date', 'S8')])
# Maps from colID to a tuple defining a numpy dtype
VMPdata_colID_dtype_map = {
    4: ('time/s', '<f8'),
    5: ('control/V/mA', '<f4'),
    6: ('Ewe/V', '<f4'),
    7: ('dQ/mA.h', '<f8'),
    8: ('I/mA', '<f4'),  # 8 is either I or <I> ??
    9: ('Ece/V', '<f4'),
    11: ('I/mA', '<f8'),
    13: ('(Q-Qo)/mA.h', '<f8'),
    16: ('Analog IN 1/V', '<f4'),
    19: ('control/V', '<f4'),
    20: ('control/mA', '<f4'),
    23: ('dQ/mA.h', '<f8'),  # Same as 7?
    24: ('cycle number', '<f8'),
    26: ('Rapp/Ohm', '<f4'),
    32: ('freq/Hz', '<f4'),
    33: ('|Ewe|/V', '<f4'),
    34: ('|I|/A', '<f4'),
    35: ('Phase(Z)/deg', '<f4'),
    36: ('|Z|/Ohm', '<f4'),
    37: ('Re(Z)/Ohm', '<f4'),
    38: ('-Im(Z)/Ohm', '<f4'),
    39: ('I Range', '<u2'),
    69: ('R/Ohm', '<f4'),
    70: ('P/W', '<f4'),
    74: ('Energy/W.h', '<f8'),
    75: ('Analog OUT/V', '<f4'),
    76: ('<I>/mA', '<f4'),
    77: ('<Ewe>/V', '<f4'),
    78: ('Cs-2/µF-2', '<f4'),
    96: ('|Ece|/V', '<f4'),
    98: ('Phase(Zce)/deg', '<f4'),
    99: ('|Zce|/Ohm', '<f4'),
    100: ('Re(Zce)/Ohm', '<f4'),
    101: ('-Im(Zce)/Ohm', '<f4'),
    123: ('Energy charge/W.h', '<f8'),
    124: ('Energy discharge/W.h', '<f8'),
    125: ('Capacitance charge/µF', '<f8'),
    126: ('Capacitance discharge/µF', '<f8'),
    131: ('Ns', '<u2'),
    163: ('|Estack|/V', '<f4'),
    168: ('Rcmp/Ohm', '<f4'),
    169: ('Cs/µF', '<f4'),
    172: ('Cp/µF', '<f4'),
    173: ('Cp-2/µF-2', '<f4'),
    241: ('|E1|/V', '<f4'),
    242: ('|E2|/V', '<f4'),
    271: ('Phase(Z1) / deg', '<f4'),
    272: ('Phase(Z2) / deg', '<f4'),
    301: ('|Z1|/Ohm', '<f4'),
    302: ('|Z2|/Ohm', '<f4'),
    331: ('Re(Z1)/Ohm', '<f4'),
    332: ('Re(Z2)/Ohm', '<f4'),
    361: ('-Im(Z1)/Ohm', '<f4'),
    362: ('-Im(Z2)/Ohm', '<f4'),
    391: ('<E1>/V', '<f4'),
    392: ('<E2>/V', '<f4'),
    422: ('Phase(Zstack)/deg', '<f4'),
    423: ('|Zstack|/Ohm', '<f4'),
    424: ('Re(Zstack)/Ohm', '<f4'),
    425: ('-Im(Zstack)/Ohm', '<f4'),
    426: ('<Estack>/V', '<f4'),
    430: ('Phase(Zwe-ce)/deg', '<f4'),
    431: ('|Zwe-ce|/Ohm', '<f4'),
    432: ('Re(Zwe-ce)/Ohm', '<f4'),
    433: ('-Im(Zwe-ce)/Ohm', '<f4'),
    434: ('(Q-Qo)/C', '<f4'),
    435: ('dQ/C', '<f4'),
    441: ('<Ecv>/V', '<f4'),
    462: ('Temperature/°C', '<f4'),
    467: ('Q charge/discharge/mA.h', '<f8'),
    468: ('half cycle', '<u4'),
    469: ('z cycle', '<u4'),
    471: ('<Ece>/V', '<f4'),
    473: ('THD Ewe/%', '<f4'),
    474: ('THD I/%', '<f4'),
    476: ('NSD Ewe/%', '<f4'),
    477: ('NSD I/%', '<f4'),
    479: ('NSR Ewe/%', '<f4'),
    480: ('NSR I/%', '<f4'),
    486: ('|Ewe h2|/V', '<f4'),
    487: ('|Ewe h3|/V', '<f4'),
    488: ('|Ewe h4|/V', '<f4'),
    489: ('|Ewe h5|/V', '<f4'),
    490: ('|Ewe h6|/V', '<f4'),
    491: ('|Ewe h7|/V', '<f4'),
    492: ('|I h2|/A', '<f4'),
    493: ('|I h3|/A', '<f4'),
    494: ('|I h4|/A', '<f4'),
    495: ('|I h5|/A', '<f4'),
    496: ('|I h6|/A', '<f4'),
    497: ('|I h7|/A', '<f4'),
}
# These column IDs define flags which are all stored packed in a single byte
# The values in the map are (name, bitmask, dtype)
VMPdata_colID_flag_map = {
1: ('mode', 0x03, np.uint8),
2: ('ox/red', 0x04, np.bool_),
3: ('error', 0x08, np.bool_),
21: ('control changes', 0x10, np.bool_),
31: ('Ns changes', 0x20, np.bool_),
65: ('counter inc.', 0x80, np.bool_),
}
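The flag columns above are all packed into a single byte per data point; each one is recovered by AND-ing that byte with its bitmask. A minimal sketch of the extraction (the sample byte values here are invented for illustration):

```python
import numpy as np

# Hypothetical packed 'flags' bytes as they might appear in two MPR records.
flags = np.array([0b00010111, 0b10100000], dtype=np.uint8)

def extract_flag(flags, bitmask, dtype):
    """Extract one packed column from the flags byte using its bitmask."""
    return np.asarray(flags & np.uint8(bitmask), dtype=dtype)

mode = extract_flag(flags, 0x03, np.uint8)      # two-bit integer field
ox_red = extract_flag(flags, 0x04, np.bool_)    # single-bit boolean field
counter = extract_flag(flags, 0x80, np.bool_)   # 'counter inc.' bit

print(mode.tolist())     # [3, 0]
print(ox_red.tolist())   # [True, False]
print(counter.tolist())  # [False, True]
```

Note that the 'mode' field keeps its integer dtype, so the mask can carry more than one bit of information, while the single-bit flags collapse to booleans.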
def parse_BioLogic_date(date_text):
    """Parse a date from one of the various formats used by Bio-Logic files."""
    date_formats = ['%m/%d/%y', '%m-%d-%y', '%m.%d.%y']
    if isinstance(date_text, bytes):
        date_string = date_text.decode('ascii')
    else:
        date_string = date_text
    for date_format in date_formats:
        try:
            tm = time.strptime(date_string, date_format)
        except ValueError:
            continue
        else:
            break
    else:
        raise ValueError(f'Could not parse timestamp {date_string!r}'
                         f' with any of the formats {date_formats}')
    return date(tm.tm_year, tm.tm_mon, tm.tm_mday)
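The fallback loop above can be exercised in isolation; this standalone sketch (the `parse_date` name is hypothetical) shows how `time.strptime` is tried against each format in turn, with a `for`/`else` raising only when every format fails:

```python
import time
from datetime import date

date_formats = ['%m/%d/%y', '%m-%d-%y', '%m.%d.%y']

def parse_date(date_string):
    """Try each known Bio-Logic date format until one parses."""
    for fmt in date_formats:
        try:
            tm = time.strptime(date_string, fmt)
        except ValueError:
            continue  # wrong separator or layout; try the next format
        return date(tm.tm_year, tm.tm_mon, tm.tm_mday)
    raise ValueError('Could not parse %r' % date_string)

print(parse_date('09/23/14'))  # 2014-09-23
print(parse_date('10.25.18'))  # 2018-10-25
```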
def VMPdata_dtype_from_colIDs(colIDs):
    """Get a numpy record type from a list of column ID numbers.

    The binary layout of the data in the MPR file is described by the sequence
    of column ID numbers in the file header. This function converts that
    sequence into a numpy dtype which can then be used to load data from the
    file with np.frombuffer().

    Some column IDs refer to small values which are packed into a single byte.
    The second return value is a dict describing the bit masks with which to
    extract these columns from the flags byte.
    """
    type_list = []
    field_name_counts = defaultdict(int)
    flags_dict = OrderedDict()
    for colID in colIDs:
        if colID in VMPdata_colID_flag_map:
            # Some column IDs represent boolean flags or small integers
            # These are all packed into a single 'flags' byte whose position
            # in the overall record is determined by the position of the first
            # column ID of flag type. If there are several flags present,
            # there is still only one 'flags' int
            if 'flags' not in field_name_counts:
                type_list.append(('flags', 'u1'))
                field_name_counts['flags'] = 1
            flag_name, flag_mask, flag_type = VMPdata_colID_flag_map[colID]
            # TODO what happens if a flag colID has already been seen
            # i.e. if flag_name is already present in flags_dict?
            # Does it create a second 'flags' byte in the record?
            flags_dict[flag_name] = (np.uint8(flag_mask), flag_type)
        elif colID in VMPdata_colID_dtype_map:
            field_name, field_type = VMPdata_colID_dtype_map[colID]
            field_name_counts[field_name] += 1
            count = field_name_counts[field_name]
            if count > 1:
                unique_field_name = '%s %d' % (field_name, count)
            else:
                unique_field_name = field_name
            type_list.append((unique_field_name, field_type))
        else:
            raise NotImplementedError("Column ID {cid} after column {prev} "
                                      "is unknown"
                                      .format(cid=colID,
                                              prev=type_list[-1][0]))
    return np.dtype(type_list), flags_dict
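The dtype construction can be demonstrated with a toy subset of the column-ID table; `colid_map` below is an illustrative excerpt, not the full `VMPdata_colID_dtype_map`, and the round-trip through raw bytes mirrors what `MPRfile` does with the data module:

```python
from collections import defaultdict
import numpy as np

# Illustrative excerpt of the colID -> (name, dtype) map; not the full table.
colid_map = {4: ('time/s', '<f8'), 8: ('I/mA', '<f4'), 36: ('|Z|/Ohm', '<f4')}

def dtype_from_colids(colids):
    """Build a record dtype, renaming repeated columns 'name 2', 'name 3', ..."""
    counts = defaultdict(int)
    fields = []
    for cid in colids:
        name, typ = colid_map[cid]
        counts[name] += 1
        if counts[name] > 1:
            name = '%s %d' % (name, counts[name])
        fields.append((name, typ))
    return np.dtype(fields)

# Repeated column IDs get unique field names.
print(dtype_from_colids([4, 8, 8]).names)  # ('time/s', 'I/mA', 'I/mA 2')

# Round-trip one record through raw bytes, as np.frombuffer would read them.
dtype = dtype_from_colids([4, 8, 36])
raw = np.array([(0.25, 1.5, 42.0)], dtype=dtype).tobytes()
record = np.frombuffer(raw, dtype=dtype)
print(record['time/s'][0])  # 0.25
```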
def read_VMP_modules(fileobj, read_module_data=True):
@@ -259,13 +325,14 @@ def read_VMP_modules(fileobj, read_module_data=True):
        if len(module_magic) == 0:  # end of file
            break
        elif module_magic != b'MODULE':
            raise ValueError("Found %r, expecting start of new VMP MODULE"
                             % module_magic)

        hdr_bytes = fileobj.read(VMPmodule_hdr.itemsize)
        if len(hdr_bytes) < VMPmodule_hdr.itemsize:
            raise IOError("Unexpected end of file while reading module header")

        hdr = np.frombuffer(hdr_bytes, dtype=VMPmodule_hdr, count=1)
        hdr_dict = dict(((n, hdr[n][0]) for n in VMPmodule_hdr.names))
        hdr_dict['offset'] = fileobj.tell()
        if read_module_data:
@@ -283,6 +350,9 @@ def read_VMP_modules(fileobj, read_module_data=True):
            fileobj.seek(hdr_dict['offset'] + hdr_dict['length'], SEEK_SET)
MPR_MAGIC = b'BIO-LOGIC MODULAR FILE\x1a'.ljust(48) + b'\x00\x00\x00\x00'
class MPRfile:
"""Bio-Logic .mpr file
@@ -300,74 +370,81 @@ class MPRfile:
"""
    def __init__(self, file_or_path):
        self.loop_index = None
        if isinstance(file_or_path, str):
            mpr_file = open(file_or_path, 'rb')
        else:
            mpr_file = file_or_path
        magic = mpr_file.read(len(MPR_MAGIC))
        if magic != MPR_MAGIC:
            raise ValueError('Invalid magic for .mpr file: %s' % magic)

        modules = list(read_VMP_modules(mpr_file))
        self.modules = modules
        settings_mod, = (m for m in modules if m['shortname'] == b'VMP Set ')
        data_module, = (m for m in modules if m['shortname'] == b'VMP data ')
        maybe_loop_module = [m for m in modules if m['shortname'] == b'VMP loop ']
        maybe_log_module = [m for m in modules if m['shortname'] == b'VMP LOG ']

        n_data_points = np.frombuffer(data_module['data'][:4], dtype='<u4')
        n_columns = np.frombuffer(data_module['data'][4:5], dtype='u1').item()

        if data_module['version'] == 0:
            column_types = np.frombuffer(data_module['data'][5:], dtype='u1',
                                         count=n_columns)
            remaining_headers = data_module['data'][5 + n_columns:100]
            main_data = data_module['data'][100:]
        elif data_module['version'] in [2, 3]:
            column_types = np.frombuffer(data_module['data'][5:], dtype='<u2',
                                         count=n_columns)
            # There are bytes of data before the main array starts
            if data_module['version'] == 3:
                num_bytes_before = 406  # version 3 added `\x01` to the start
            else:
                num_bytes_before = 405
            remaining_headers = data_module['data'][5 + 2 * n_columns:405]
            main_data = data_module['data'][num_bytes_before:]
        else:
            raise ValueError("Unrecognised version for data module: %d" %
                             data_module['version'])

        assert not any(remaining_headers)

        self.dtype, self.flags_dict = VMPdata_dtype_from_colIDs(column_types)
        self.data = np.frombuffer(main_data, dtype=self.dtype)
        assert self.data.shape[0] == n_data_points

        # No idea what these 'column types' mean or even if they are actually
        # column types at all
        self.version = int(data_module['version'])
        self.cols = column_types
        self.npts = n_data_points

        self.startdate = parse_BioLogic_date(settings_mod['date'])

        if maybe_loop_module:
            loop_module, = maybe_loop_module
            if loop_module['version'] == 0:
                self.loop_index = np.frombuffer(loop_module['data'][4:],
                                                dtype='<u4')
                self.loop_index = np.trim_zeros(self.loop_index, 'b')
            else:
                raise ValueError("Unrecognised version for loop module: %d" %
                                 loop_module['version'])

        if maybe_log_module:
            log_module, = maybe_log_module
            self.enddate = parse_BioLogic_date(log_module['date'])

            # There is a timestamp at either 465 or 469 bytes
            # I can't find any reason why it is one or the other in any
            # given file
            ole_timestamp1 = np.frombuffer(log_module['data'][465:],
                                           dtype='<f8', count=1)
            ole_timestamp2 = np.frombuffer(log_module['data'][469:],
                                           dtype='<f8', count=1)
            ole_timestamp3 = np.frombuffer(log_module['data'][473:],
                                           dtype='<f8', count=1)
            ole_timestamp4 = np.frombuffer(log_module['data'][585:],
                                           dtype='<f8', count=1)

            if ole_timestamp1 > 40000 and ole_timestamp1 < 50000:
@@ -386,17 +463,14 @@ class MPRfile:
            ole_timedelta = timedelta(days=ole_timestamp[0])
            self.timestamp = ole_base + ole_timedelta
            if self.startdate != self.timestamp.date():
                raise ValueError("Date mismatch:\n"
                                 + "  Start date: %s\n" % self.startdate
                                 + "  End date: %s\n" % self.enddate
                                 + "  Timestamp: %s\n" % self.timestamp)

    def get_flag(self, flagname):
        if flagname in self.flags_dict:
            mask, dtype = self.flags_dict[flagname]
            return np.array(self.data['flags'] & mask, dtype=dtype)
        else:
            raise AttributeError("Flag '%s' not present" % flagname)
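The OLE timestamps read from the log module are fractional day counts relative to 30 December 1899, which is what the `ole_base + ole_timedelta` arithmetic converts. A minimal standalone sketch of that conversion:

```python
from datetime import datetime, timedelta

# OLE timestamps are fractional day counts since 30 December 1899.
ole_base = datetime(1899, 12, 30)

def ole_to_datetime(serial_days):
    """Convert an OLE serial date (as read with np.frombuffer) to a datetime."""
    return ole_base + timedelta(days=serial_days)

print(ole_to_datetime(41000.5))  # 2012-04-01 12:00:00
```

The sanity check `40000 < ole_timestamp1 < 50000` in the code above corresponds roughly to dates between 2009 and 2036, which is how the reader guesses which byte offset actually holds the timestamp.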


@@ -1 +1,7 @@
# SPDX-FileCopyrightText: 2014-2019 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
from .BioLogic import MPRfile, MPTfile
__all__ = ['MPRfile', 'MPTfile']


@@ -1,30 +1,59 @@
#!/usr/bin/python
# SPDX-FileCopyrightText: 2013-2020 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
import subprocess as sp
import sqlite3
import re
import csv
import argparse
from copy import copy
# The following scripts are adapted from the result of running
# $ mdb-schema <result.res> oracle
mdb_tables = [
    'Version_Table',
    'Global_Table',
    'Resume_Table',
    'Channel_Normal_Table',
    'Channel_Statistic_Table',
    'Auxiliary_Table',
    'Event_Table',
    'Smart_Battery_Info_Table',
    'Smart_Battery_Data_Table',
]
mdb_5_23_tables = [
    'MCell_Aci_Data_Table',
    'Aux_Global_Data_Table',
    'Smart_Battery_Clock_Stretch_Table',
]
mdb_5_26_tables = [
    'Can_BMS_Info_Table',
    'Can_BMS_Data_Table',
]
mdb_tables_text = {
    'Version_Table',
    'Global_Table',
    'Event_Table',
    'Smart_Battery_Info_Table',
    'Can_BMS_Info_Table',
}
mdb_tables_numeric = {
    'Resume_Table',
    'Channel_Normal_Table',
    'Channel_Statistic_Table',
    'Auxiliary_Table',
    'Smart_Battery_Data_Table',
    'MCell_Aci_Data_Table',
    'Aux_Global_Data_Table',
    'Smart_Battery_Clock_Stretch_Table',
    'Can_BMS_Data_Table',
}
mdb_create_scripts = {
"Version_Table": """
@@ -56,8 +85,17 @@ CREATE TABLE Global_Table
Log_Aux_Data_Flag INTEGER,
Log_Event_Data_Flag INTEGER,
Log_Smart_Battery_Data_Flag INTEGER,
-- The following items are in 5.26 but not in 5.23
Log_Can_BMS_Data_Flag INTEGER DEFAULT NULL,
Software_Version TEXT DEFAULT NULL,
Serial_Number TEXT DEFAULT NULL,
Schedule_Version TEXT DEFAULT NULL,
MASS REAL DEFAULT NULL,
Specific_Capacity REAL DEFAULT NULL,
Capacity REAL DEFAULT NULL,
-- Item_ID exists in all versions
Item_ID TEXT,
-- Version 1.14 ends here, version 5.23 continues
-- These items are in 5.26 and 5.23 but not in 1.14
Mapped_Aux_Conc_CNumber INTEGER DEFAULT NULL,
Mapped_Aux_DI_CNumber INTEGER DEFAULT NULL,
Mapped_Aux_DO_CNumber INTEGER DEFAULT NULL
@@ -65,7 +103,7 @@ CREATE TABLE Global_Table
"Resume_Table": """
CREATE TABLE Resume_Table
(
Test_ID INTEGER PRIMARY KEY REFERENCES Global_Table(Test_ID),
Step_Index INTEGER,
Cycle_Index INTEGER,
Channel_Status INTEGER,
@@ -115,7 +153,8 @@ CREATE TABLE Channel_Normal_Table
"dV/dt" REAL,
Internal_Resistance REAL,
AC_Impedance REAL,
ACI_Phase_Angle REAL,
PRIMARY KEY (Test_ID, Data_Point)
); """,
"Channel_Statistic_Table": """
CREATE TABLE Channel_Statistic_Table
@@ -126,7 +165,9 @@ CREATE TABLE Channel_Statistic_Table
-- Version 1.14 ends here, version 5.23 continues
Charge_Time REAL DEFAULT NULL,
Discharge_Time REAL DEFAULT NULL,
PRIMARY KEY (Test_ID, Data_Point),
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
); """,
"Auxiliary_Table": """
CREATE TABLE Auxiliary_Table
@@ -137,7 +178,9 @@ CREATE TABLE Auxiliary_Table
Data_Type INTEGER,
X REAL,
"dX/dt" REAL,
PRIMARY KEY (Test_ID, Data_Point, Auxiliary_Index),
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
); """,
"Event_Table": """
CREATE TABLE Event_Table
@@ -151,7 +194,7 @@ CREATE TABLE Event_Table
"Smart_Battery_Info_Table": """
CREATE TABLE Smart_Battery_Info_Table
(
Test_ID INTEGER PRIMARY KEY REFERENCES Global_Table(Test_ID),
ManufacturerDate REAL,
ManufacturerAccess TEXT,
SpecificationInfo TEXT,
@@ -220,9 +263,14 @@ CREATE TABLE Smart_Battery_Data_Table
ChargingCurrent REAL DEFAULT NULL,
ChargingVoltage REAL DEFAULT NULL,
ManufacturerData REAL DEFAULT NULL,
-- Version 5.23 ends here, version 5.26 continues
BATMAN_Status INTEGER DEFAULT NULL,
DTM_PDM_Status INTEGER DEFAULT NULL,
PRIMARY KEY (Test_ID, Data_Point),
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
); """,
# The following tables are not present in version 1.14, but are in 5.23
'MCell_Aci_Data_Table': """
CREATE TABLE MCell_Aci_Data_Table
(
@@ -233,7 +281,9 @@ CREATE TABLE MCell_Aci_Data_Table
Phase_Shift REAL,
Voltage REAL,
Current REAL,
PRIMARY KEY (Test_ID, Data_Point, Cell_Index),
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
);""",
'Aux_Global_Data_Table': """
CREATE TABLE Aux_Global_Data_Table
@@ -242,7 +292,8 @@ CREATE TABLE Aux_Global_Data_Table
Auxiliary_Index INTEGER,
Data_Type INTEGER,
Nickname TEXT,
Unit TEXT,
PRIMARY KEY (Channel_Index, Auxiliary_Index, Data_Type)
);""",
'Smart_Battery_Clock_Stretch_Table': """
CREATE TABLE Smart_Battery_Clock_Stretch_Table
@@ -288,8 +339,32 @@ CREATE TABLE Smart_Battery_Clock_Stretch_Table
VCELL3 INTEGER,
VCELL2 INTEGER,
VCELL1 INTEGER,
PRIMARY KEY (Test_ID, Data_Point),
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
);""",
# The following tables are not present in version 5.23, but are in 5.26
'Can_BMS_Info_Table': """
CREATE TABLE "Can_BMS_Info_Table"
(
Channel_Index INTEGER PRIMARY KEY,
CAN_Cfg_File_Name TEXT,
CAN_Configuration TEXT
);
""",
'Can_BMS_Data_Table': """
CREATE TABLE "Can_BMS_Data_Table"
(
Test_ID INTEGER,
Data_Point INTEGER,
CAN_MV_Index INTEGER,
Signal_Value_X REAL,
PRIMARY KEY (Test_ID, Data_Point, CAN_MV_Index),
FOREIGN KEY (Test_ID, Data_Point)
REFERENCES Channel_Normal_Table (Test_ID, Data_Point)
);
""",
}
mdb_create_indices = {
"Channel_Normal_Table": """
@@ -306,11 +381,14 @@ CREATE TEMPORARY TABLE capacity_helper(
Discharge_Capacity REAL NOT NULL,
Charge_Energy REAL NOT NULL,
Discharge_Energy REAL NOT NULL,
FOREIGN KEY (Test_ID, Cycle_Index)
REFERENCES Channel_Normal_Table (Test_ID, Cycle_Index)
);
INSERT INTO capacity_helper
SELECT Test_ID, Cycle_Index,
max(Charge_Capacity), max(Discharge_Capacity),
max(Charge_Energy), max(Discharge_Energy)
FROM Channel_Normal_Table
GROUP BY Test_ID, Cycle_Index;
@@ -328,11 +406,14 @@ CREATE TABLE Capacity_Sum_Table(
Discharge_Capacity_Sum REAL NOT NULL,
Charge_Energy_Sum REAL NOT NULL,
Discharge_Energy_Sum REAL NOT NULL,
FOREIGN KEY (Test_ID, Cycle_Index)
REFERENCES Channel_Normal_Table (Test_ID, Cycle_Index)
);
INSERT INTO Capacity_Sum_Table
SELECT a.Test_ID, a.Cycle_Index,
total(b.Charge_Capacity), total(b.Discharge_Capacity),
total(b.Charge_Energy), total(b.Discharge_Energy)
FROM capacity_helper AS a LEFT JOIN capacity_helper AS b
ON (a.Test_ID = b.Test_ID AND a.Cycle_Index > b.Cycle_Index)
GROUP BY a.Test_ID, a.Cycle_Index;
@@ -342,54 +423,79 @@ DROP TABLE capacity_helper;
CREATE VIEW IF NOT EXISTS Capacity_View
AS SELECT Test_ID, Data_Point, Test_Time, Step_Time, DateTime,
Step_Index, Cycle_Index, Current, Voltage, "dV/dt",
( (Discharge_Capacity + Discharge_Capacity_Sum)
- (Charge_Capacity + Charge_Capacity_Sum) ) AS Net_Capacity,
( (Discharge_Capacity + Discharge_Capacity_Sum)
+ (Charge_Capacity + Charge_Capacity_Sum) ) AS Gross_Capacity,
( (Discharge_Energy + Discharge_Energy_Sum)
- (Charge_Energy + Charge_Energy_Sum) ) AS Net_Energy,
( (Discharge_Energy + Discharge_Energy_Sum)
+ (Charge_Energy + Charge_Energy_Sum) ) AS Gross_Energy
FROM Channel_Normal_Table NATURAL JOIN Capacity_Sum_Table;
"""
def mdb_get_data_text(s3db, filename, table):
    print("Reading %s..." % table)
    insert_pattern = re.compile(
        r'INSERT INTO "\w+" \([^)]+?\) VALUES \(("[^"]*"|[^")])+?\);\n',
        re.IGNORECASE
    )
    try:
        # Initialize values to avoid NameError in except clause
        mdb_output = ''
        insert_match = None
        with sp.Popen(['mdb-export', '-I', 'postgres', filename, table],
                      bufsize=-1, stdin=sp.DEVNULL, stdout=sp.PIPE,
                      universal_newlines=True) as mdb_sql:
            mdb_output = mdb_sql.stdout.read()
            while len(mdb_output) > 0:
                insert_match = insert_pattern.match(mdb_output)
                s3db.execute(insert_match.group())
                mdb_output = mdb_output[insert_match.end():]
                mdb_output += mdb_sql.stdout.read()
        s3db.commit()
    except OSError as e:
        if e.errno == 2:
            raise RuntimeError('Could not locate the `mdb-export` executable. '
                               'Check that mdbtools is properly installed.')
        else:
            raise
    except BaseException:
        print("Error while importing %s" % table)
        if mdb_output:
            print("Remaining mdb-export output:", mdb_output)
        if insert_match:
            print("insert_re match:", insert_match)
        raise
def mdb_get_data_numeric(s3db, filename, table):
    print("Reading %s..." % table)
    try:
        with sp.Popen(['mdb-export', filename, table],
                      bufsize=-1, stdin=sp.DEVNULL, stdout=sp.PIPE,
                      universal_newlines=True) as mdb_sql:
            mdb_csv = csv.reader(mdb_sql.stdout)
            mdb_headers = next(mdb_csv)
            quoted_headers = ['"%s"' % h for h in mdb_headers]
            joined_headers = ', '.join(quoted_headers)
            joined_placemarks = ', '.join(['?' for h in mdb_headers])
            insert_stmt = 'INSERT INTO "{0}" ({1}) VALUES ({2});'.format(
                table,
                joined_headers,
                joined_placemarks,
            )
            s3db.executemany(insert_stmt, mdb_csv)
        s3db.commit()
    except OSError as e:
        if e.errno == 2:
            raise RuntimeError('Could not locate the `mdb-export` executable. '
                               'Check that mdbtools is properly installed.')
        else:
            raise
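The header-quoting and placeholder logic used when streaming numeric tables can be demonstrated without mdbtools by feeding a small in-memory CSV into sqlite3; the table name and data below are invented stand-ins for real `mdb-export` output:

```python
import csv
import io
import sqlite3

# Invented stand-in for `mdb-export` CSV output: header row then data rows.
csv_text = "Test_ID,Data_Point,Voltage\n1,1,3.7\n1,2,3.8\n"

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE "Demo_Table" '
           '(Test_ID INTEGER, Data_Point INTEGER, Voltage REAL);')

reader = csv.reader(io.StringIO(csv_text))
headers = next(reader)  # first row holds the column names
joined_headers = ', '.join('"%s"' % h for h in headers)
placeholders = ', '.join('?' for _ in headers)
insert_stmt = 'INSERT INTO "Demo_Table" (%s) VALUES (%s);' % (joined_headers,
                                                              placeholders)
db.executemany(insert_stmt, reader)  # remaining rows stream straight in

print(db.execute('SELECT count(*) FROM "Demo_Table";').fetchone()[0])  # 2
```

Because `executemany` accepts any iterable of rows, the `csv.reader` can be passed directly and the rows never need to be held in memory all at once.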
def mdb_get_data(s3db, filename, table):
@@ -401,31 +507,69 @@ def mdb_get_data(s3db, filename, table):
raise ValueError("'%s' is in neither mdb_tables_text nor mdb_tables_numeric" % table)
def mdb_get_version(filename):
    """Get the version number from an Arbin .res file.

    Reads the Version_Table and parses the version from Version_Schema_Field.
    """
    print("Reading version number...")
    try:
        with sp.Popen(['mdb-export', filename, 'Version_Table'],
                      bufsize=-1, stdin=sp.DEVNULL, stdout=sp.PIPE,
                      universal_newlines=True) as mdb_sql:
            mdb_csv = csv.reader(mdb_sql.stdout)
            mdb_headers = next(mdb_csv)
            mdb_values = next(mdb_csv)
            try:
                next(mdb_csv)
            except StopIteration:
                pass
            else:
                raise ValueError('Version_Table of %s lists multiple versions'
                                 % filename)
    except OSError as e:
        if e.errno == 2:
            raise RuntimeError('Could not locate the `mdb-export` executable. '
                               'Check that mdbtools is properly installed.')
        else:
            raise
    if 'Version_Schema_Field' not in mdb_headers:
        raise ValueError('Version_Table of %s does not contain a '
                         'Version_Schema_Field column' % filename)
    version_fields = dict(zip(mdb_headers, mdb_values))
    version_text = version_fields['Version_Schema_Field']
    version_match = re.fullmatch('Results File ([.0-9]+)', version_text)
    if not version_match:
        raise ValueError('File version "%s" did not match expected format'
                         % version_text)
    version_string = version_match.group(1)
    version_tuple = tuple(map(int, version_string.split('.')))
    return version_tuple
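The version parsing at the end of `mdb_get_version` can be sketched on its own; returning a tuple of ints makes comparisons like `>= (5, 23)` work element-wise, which is what the table-selection logic below relies on:

```python
import re

def parse_version(version_text):
    """Parse 'Results File X.Y' into a comparable tuple of ints."""
    match = re.fullmatch(r'Results File ([.0-9]+)', version_text)
    if not match:
        raise ValueError('File version %r did not match expected format'
                         % version_text)
    return tuple(map(int, match.group(1).split('.')))

print(parse_version('Results File 5.26'))            # (5, 26)
print(parse_version('Results File 5.26') >= (5, 23))  # True
```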
def convert_arbin_to_sqlite(input_file, output_file):
    """Read data from an Arbin .res data file and write to a sqlite file.

    Any data currently in the sqlite file will be erased!
    """
    arbin_version = mdb_get_version(input_file)
    s3db = sqlite3.connect(output_file)
    tables_to_convert = copy(mdb_tables)
    if arbin_version >= (5, 23):
        tables_to_convert.extend(mdb_5_23_tables)
    if arbin_version >= (5, 26):
        tables_to_convert.extend(mdb_5_26_tables)
    for table in reversed(tables_to_convert):
        s3db.execute('DROP TABLE IF EXISTS "%s";' % table)
    for table in tables_to_convert:
        s3db.executescript(mdb_create_scripts[table])
        mdb_get_data(s3db, input_file, table)
        if table in mdb_create_indices:
            print("Creating indices for %s..." % table)
            s3db.executescript(mdb_create_indices[table])
    print("Creating helper table for capacity and energy totals...")
    s3db.executescript(helper_table_script)
@@ -434,7 +578,9 @@ def convert_arbin_to_sqlite(input_file, output_file):
def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Convert Arbin .res files to sqlite3 databases using mdb-export",
    )
    parser.add_argument('input_file', type=str)  # need file name to pass to sp.Popen
    parser.add_argument('output_file', type=str)  # need file name to pass to sqlite3.connect


@@ -1,5 +1,9 @@
#!/bin/sh
# SPDX-FileCopyrightText: 2014-2020 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
# Test data are posted on FigShare, listed in this article
# http://figshare.com/articles/galvani_test_data/1228760
@@ -7,20 +11,22 @@ mkdir -p tests/testdata
cd tests/testdata
/usr/bin/wget --continue -i - <<END_FILELIST
https://files.figshare.com/1778905/arbin1.res
https://files.figshare.com/1778937/bio_logic2.mpt
https://files.figshare.com/1778938/bio_logic5.mpt
https://files.figshare.com/1778939/bio_logic1.mpr
https://files.figshare.com/1778940/bio_logic6.mpr
https://files.figshare.com/1778941/bio_logic4.mpt
https://files.figshare.com/1778942/bio_logic5.mpr
https://files.figshare.com/1778943/bio_logic2.mpr
https://files.figshare.com/1778944/bio_logic6.mpt
https://files.figshare.com/1778945/bio_logic1.mpt
https://files.figshare.com/1778946/bio_logic3.mpr
https://files.figshare.com/1780444/bio_logic4.mpr
https://files.figshare.com/1780529/121_CA_455nm_6V_30min_C01.mpr
https://files.figshare.com/1780530/121_CA_455nm_6V_30min_C01.mpt
https://files.figshare.com/1780526/CV_C01.mpr
https://files.figshare.com/1780527/CV_C01.mpt
https://files.figshare.com/14752538/C019P-0ppb-A_C01.mpr
https://files.figshare.com/25331510/UM34_Test005E.res
END_FILELIST


@@ -1 +1,3 @@
# SPDX-FileCopyrightText: 2017 Christopher Kerr <chris.kerr@mykolab.ch>
# SPDX-License-Identifier: CC0-1.0
numpy

setup.cfg Normal file

@@ -0,0 +1,8 @@
# SPDX-FileCopyrightText: 2021 Christopher Kerr
# SPDX-License-Identifier: CC0-1.0
[metadata]
# N.B. The MIT-licensed CodeQL file and the CC0-licensed
# config files are not included in the .whl package so
# their licenses do not need to be packaged either.
license_files =
LICENSES/GPL-3.0-or-later.txt


@@ -1,4 +1,7 @@
# -*- coding: utf-8 -*-
# SPDX-FileCopyrightText: 2014-2020 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
import os.path
@@ -9,11 +12,11 @@ with open(os.path.join(os.path.dirname(__file__), 'README.md')) as f:
setup(
name='galvani',
version='0.2.1',
description='Open and process battery charger log data files',
long_description=readme,
long_description_content_type="text/markdown",
url='https://github.com/echemdata/galvani',
author='Chris Kerr',
author_email='chris.kerr@mykolab.ch',
license='GPLv3+',
@@ -23,10 +26,16 @@ setup(
'Intended Audience :: Science/Research',
'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',
'Natural Language :: English',
'Programming Language :: Python :: 3 :: Only',
'Topic :: Scientific/Engineering :: Chemistry',
],
packages=['galvani'],
    entry_points={
        'console_scripts': [
            'res2sqlite = galvani.res2sqlite:main',
        ],
    },
python_requires='>=3.6',
install_requires=['numpy'],
tests_require=['pytest'],
)

tests/conftest.py Normal file

@@ -0,0 +1,15 @@
"""Helpers for pytest tests."""
# SPDX-FileCopyrightText: 2019 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import pytest
@pytest.fixture(scope='session')
def testdata_dir():
    """Path to the testdata directory."""
    return os.path.join(os.path.dirname(__file__), 'testdata')

tests/test_Arbin.py Normal file

@@ -0,0 +1,60 @@
"""Tests for loading Arbin .res files."""
# SPDX-FileCopyrightText: 2019-2020 Christopher Kerr <chris.kerr@mykolab.ch>
#
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import sqlite3
import subprocess
import pytest
from galvani import res2sqlite
have_mdbtools = (subprocess.call(['which', 'mdb-export'],
stdout=subprocess.DEVNULL) == 0)
def test_res2sqlite_help():
"""Test running `res2sqlite --help`.
This should work even when mdbtools is not installed.
"""
help_output = subprocess.check_output(['res2sqlite', '--help'])
assert b'Convert Arbin .res files to sqlite3 databases' in help_output
@pytest.mark.skipif(have_mdbtools, reason='This tests the failure when mdbtools is not installed')
def test_convert_Arbin_no_mdbtools(testdata_dir, tmpdir):
"""Checks that the conversion fails with an appropriate error message."""
res_file = os.path.join(testdata_dir, 'arbin1.res')
sqlite_file = os.path.join(str(tmpdir), 'arbin1.s3db')
with pytest.raises(RuntimeError, match="Could not locate the `mdb-export` executable."):
res2sqlite.convert_arbin_to_sqlite(res_file, sqlite_file)
@pytest.mark.skipif(not have_mdbtools, reason='Reading the Arbin file requires MDBTools')
@pytest.mark.parametrize('basename', ['arbin1', 'UM34_Test005E'])
def test_convert_Arbin_to_sqlite_function(testdata_dir, tmpdir, basename):
"""Convert an Arbin file to SQLite using the functional interface."""
res_file = os.path.join(testdata_dir, basename + '.res')
sqlite_file = os.path.join(str(tmpdir), basename + '.s3db')
res2sqlite.convert_arbin_to_sqlite(res_file, sqlite_file)
assert os.path.isfile(sqlite_file)
with sqlite3.connect(sqlite_file) as conn:
csr = conn.execute('SELECT * FROM Channel_Normal_Table;')
csr.fetchone()
@pytest.mark.skipif(not have_mdbtools, reason='Reading the Arbin file requires MDBTools')
def test_convert_cmdline(testdata_dir, tmpdir):
"""Checks that the conversion fails with an appropriate error message."""
res_file = os.path.join(testdata_dir, 'arbin1.res')
sqlite_file = os.path.join(str(tmpdir), 'arbin1.s3db')
subprocess.check_call(['res2sqlite', res_file, sqlite_file])
assert os.path.isfile(sqlite_file)
with sqlite3.connect(sqlite_file) as conn:
csr = conn.execute('SELECT * FROM Channel_Normal_Table;')
csr.fetchone()
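The `have_mdbtools` check above shells out to the Unix `which` command, which does not exist on Windows. `shutil.which` from the standard library performs the same PATH lookup portably; this is a sketch of the alternative, not what galvani itself does:

```python
import shutil

# shutil.which returns the full path to the executable, or None if it
# cannot be found on PATH -- no subprocess needed
have_mdbtools = shutil.which('mdb-export') is not None

if not have_mdbtools:
    print('mdbtools not found; the Arbin conversion tests would be skipped')
```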

tests/test_BioLogic.py

@@ -1,99 +1,132 @@
 # -*- coding: utf-8 -*-
+# SPDX-FileCopyrightText: 2013-2020 Christopher Kerr <chris.kerr@mykolab.ch>
+#
+# SPDX-License-Identifier: GPL-3.0-or-later
+
 import os.path
 import re
 from datetime import date, datetime
 
+import numpy as np
 from numpy.testing import assert_array_almost_equal, assert_array_equal
-from nose.tools import ok_, eq_, raises
+import pytest
 
-from galvani import MPTfile, MPRfile
-from galvani.BioLogic import MPTfileCSV, str3  # not exported
-
-testdata_dir = os.path.join(os.path.dirname(__file__), 'testdata')
+from galvani import BioLogic, MPTfile, MPRfile
+from galvani.BioLogic import MPTfileCSV  # not exported
 
 
-def test_open_MPT():
+def test_open_MPT(testdata_dir):
     mpt1, comments = MPTfile(os.path.join(testdata_dir, 'bio_logic1.mpt'))
-    eq_(comments, [])
-    eq_(mpt1.dtype.names, ("mode", "ox/red", "error", "control changes",
-                           "Ns changes", "counter inc.", "time/s",
-                           "control/V/mA", "Ewe/V", "dQ/mA.h", "P/W",
-                           "I/mA", "(Q-Qo)/mA.h", "x"))
+    assert comments == []
+    assert mpt1.dtype.names == (
+        "mode", "ox/red", "error", "control changes", "Ns changes",
+        "counter inc.", "time/s", "control/V/mA", "Ewe/V", "dQ/mA.h", "P/W",
+        "I/mA", "(Q-Qo)/mA.h", "x",
+    )
 
 
-@raises(ValueError)
-def test_open_MPT_fails_for_bad_file():
-    mpt1 = MPTfile(os.path.join(testdata_dir, 'bio_logic1.mpr'))
+def test_open_MPT_fails_for_bad_file(testdata_dir):
+    with pytest.raises(ValueError, match='Bad first line'):
+        MPTfile(os.path.join(testdata_dir, 'bio_logic1.mpr'))
 
 
-def test_open_MPT_csv():
+def test_open_MPT_csv(testdata_dir):
     mpt1, comments = MPTfileCSV(os.path.join(testdata_dir, 'bio_logic1.mpt'))
-    eq_(comments, [])
-    eq_(mpt1.fieldnames, ["mode", "ox/red", "error", "control changes",
-                          "Ns changes", "counter inc.", "time/s",
-                          "control/V/mA", "Ewe/V", "dq/mA.h", "P/W",
-                          "<I>/mA", "(Q-Qo)/mA.h", "x"])
+    assert comments == []
+    assert mpt1.fieldnames == [
+        "mode", "ox/red", "error", "control changes", "Ns changes",
+        "counter inc.", "time/s", "control/V/mA", "Ewe/V", "dq/mA.h", "P/W",
+        "<I>/mA", "(Q-Qo)/mA.h", "x",
+    ]
 
 
-@raises(ValueError)
-def test_open_MPT_csv_fails_for_bad_file():
-    mpt1 = MPTfileCSV(os.path.join(testdata_dir, 'bio_logic1.mpr'))
+def test_open_MPT_csv_fails_for_bad_file(testdata_dir):
+    with pytest.raises((ValueError, UnicodeDecodeError)):
+        MPTfileCSV(os.path.join(testdata_dir, 'bio_logic1.mpr'))
 
 
-def test_open_MPR1():
-    mpr1 = MPRfile(os.path.join(testdata_dir, 'bio_logic1.mpr'))
-    ## Check the dates as a basic test that it has been read properly
-    eq_(mpr1.startdate, date(2011, 10, 29))
-    eq_(mpr1.enddate, date(2011, 10, 31))
+def test_colID_map_uniqueness():
+    """Check some uniqueness properties of the VMPdata_colID_xyz maps."""
+    field_colIDs = set(BioLogic.VMPdata_colID_dtype_map.keys())
+    flag_colIDs = set(BioLogic.VMPdata_colID_flag_map.keys())
+    field_names = [v[0] for v in BioLogic.VMPdata_colID_dtype_map.values()]
+    flag_names = [v[0] for v in BioLogic.VMPdata_colID_flag_map.values()]
+    assert not field_colIDs.intersection(flag_colIDs)
+    # 'I/mA' and 'dQ/mA.h' are duplicated
+    # assert len(set(field_names)) == len(field_names)
+    assert len(set(flag_names)) == len(flag_names)
+    assert not set(field_names).intersection(flag_names)
 
 
-def test_open_MPR2():
-    mpr2 = MPRfile(os.path.join(testdata_dir, 'bio_logic2.mpr'))
-    ## Check the dates as a basic test that it has been read properly
-    eq_(mpr2.startdate, date(2012, 9, 27))
-    eq_(mpr2.enddate, date(2012, 9, 27))
+@pytest.mark.parametrize('colIDs, expected', [
+    ([1, 2, 3], [('flags', 'u1')]),
+    ([4, 6], [('time/s', '<f8'), ('Ewe/V', '<f4')]),
+    ([1, 4, 21], [('flags', 'u1'), ('time/s', '<f8')]),
+    ([4, 6, 4], [('time/s', '<f8'), ('Ewe/V', '<f4'), ('time/s 2', '<f8')]),
+    ([4, 9999], NotImplementedError),
+])
+def test_colID_to_dtype(colIDs, expected):
+    """Test converting column ID to numpy dtype."""
+    if isinstance(expected, type) and issubclass(expected, Exception):
+        with pytest.raises(expected):
+            BioLogic.VMPdata_dtype_from_colIDs(colIDs)
+        return
+    expected_dtype = np.dtype(expected)
+    dtype, flags_dict = BioLogic.VMPdata_dtype_from_colIDs(colIDs)
+    assert dtype == expected_dtype
 
 
-def test_open_MPR3():
-    mpr = MPRfile(os.path.join(testdata_dir, 'bio_logic3.mpr'))
-    ## Check the dates as a basic test that it has been read properly
-    eq_(mpr.startdate, date(2013, 3, 27))
-    eq_(mpr.enddate, date(2013, 3, 27))
+@pytest.mark.parametrize('data, expected', [
+    ('02/23/17', date(2017, 2, 23)),
+    ('10-03-05', date(2005, 10, 3)),
+    ('11.12.20', date(2020, 11, 12)),
+    (b'01/02/03', date(2003, 1, 2)),
+    ('13.08.07', ValueError),
+    ('03-04/05', ValueError),
+])
+def test_parse_BioLogic_date(data, expected):
+    """Test the parse_BioLogic_date function."""
+    if isinstance(expected, type) and issubclass(expected, Exception):
+        with pytest.raises(expected):
+            BioLogic.parse_BioLogic_date(data)
+        return
+    result = BioLogic.parse_BioLogic_date(data)
+    assert result == expected
 
 
-def test_open_MPR4():
-    mpr = MPRfile(os.path.join(testdata_dir, 'bio_logic4.mpr'))
-    ## Check the dates as a basic test that it has been read properly
-    eq_(mpr.startdate, date(2011, 11, 1))
-    eq_(mpr.enddate, date(2011, 11, 2))
+@pytest.mark.parametrize('filename, startdate, enddate', [
+    ('bio_logic1.mpr', '2011-10-29', '2011-10-31'),
+    ('bio_logic2.mpr', '2012-09-27', '2012-09-27'),
+    ('bio_logic3.mpr', '2013-03-27', '2013-03-27'),
+    ('bio_logic4.mpr', '2011-11-01', '2011-11-02'),
+    ('bio_logic5.mpr', '2013-01-28', '2013-01-28'),
+    # bio_logic6.mpr has no end date because it does not have a VMP LOG module
+    ('bio_logic6.mpr', '2012-09-11', None),
+    # C019P-0ppb-A_C01.mpr stores the date in a different format
+    ('C019P-0ppb-A_C01.mpr', '2019-03-14', '2019-03-14'),
+    ('Rapp_Error.mpr', '2010-12-02', '2010-12-02'),
+])
+def test_MPR_dates(testdata_dir, filename, startdate, enddate):
+    """Check that the start and end dates in .mpr files are read correctly."""
+    mpr = MPRfile(os.path.join(testdata_dir, filename))
+    assert mpr.startdate.strftime('%Y-%m-%d') == startdate
+    if enddate:
+        assert mpr.enddate.strftime('%Y-%m-%d') == enddate
+    else:
+        assert not hasattr(mpr, 'enddate')
 
 
-def test_open_MPR5():
-    mpr = MPRfile(os.path.join(testdata_dir, 'bio_logic5.mpr'))
-    ## Check the dates as a basic test that it has been read properly
-    eq_(mpr.startdate, date(2013, 1, 28))
-    eq_(mpr.enddate, date(2013, 1, 28))
-
-
-def test_open_MPR6():
-    mpr = MPRfile(os.path.join(testdata_dir, 'bio_logic6.mpr'))
-    ## Check the dates as a basic test that it has been read properly
-    eq_(mpr.startdate, date(2012, 9, 11))
-    ## no end date because no VMP LOG module
-
-
-@raises(ValueError)
-def test_open_MPR_fails_for_bad_file():
-    mpr1 = MPRfile(os.path.join(testdata_dir, 'arbin1.res'))
+def test_open_MPR_fails_for_bad_file(testdata_dir):
+    with pytest.raises(ValueError, match='Invalid magic for .mpr file'):
+        MPRfile(os.path.join(testdata_dir, 'arbin1.res'))
 
 
 def timestamp_from_comments(comments):
     for line in comments:
         time_match = re.match(b'Acquisition started on : ([0-9/]+ [0-9:]+)', line)
         if time_match:
-            timestamp = datetime.strptime(str3(time_match.group(1)),
+            timestamp = datetime.strptime(time_match.group(1).decode('ascii'),
                                           '%m/%d/%Y %H:%M:%S')
             return timestamp
     raise AttributeError("No timestamp in comments")
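The `test_colID_to_dtype` cases above pin down the observable behaviour of `VMPdata_dtype_from_colIDs`: flag column IDs are packed into a single `'flags'` byte, duplicate field names get a `' 2'` suffix, and unknown IDs raise `NotImplementedError`. A standalone sketch reproducing just those cases (the two mapping tables here are tiny stand-ins for galvani's real, much larger ones, and `dtype_from_colIDs` is a hypothetical name):

```python
import numpy as np

# Stand-in tables inferred from the parametrized test cases only
FIELD_COLIDS = {4: ('time/s', '<f8'), 6: ('Ewe/V', '<f4')}
FLAG_COLIDS = {1, 2, 3, 21}  # flag bits share one 'flags' byte


def dtype_from_colIDs(colIDs):
    """Build a numpy record dtype from a list of column IDs."""
    fields = []
    names = set()
    for cid in colIDs:
        if cid in FLAG_COLIDS:
            # All flag columns map onto a single u1 'flags' field
            if 'flags' not in names:
                fields.append(('flags', 'u1'))
                names.add('flags')
        elif cid in FIELD_COLIDS:
            name, dt = FIELD_COLIDS[cid]
            if name in names:
                # One duplicate is enough for the test cases above
                name += ' 2'
            fields.append((name, dt))
            names.add(name)
        else:
            raise NotImplementedError(f'unknown column ID {cid}')
    return np.dtype(fields)
```

Run against the parametrized cases, this yields e.g. `dtype_from_colIDs([1, 2, 3]) == np.dtype([('flags', 'u1')])`.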
@@ -117,7 +150,7 @@ def assert_MPR_matches_MPT(mpr, mpt, comments):
     assert_array_equal(mpr.get_flag("control changes"), mpt["control changes"])
     if "Ns changes" in mpt.dtype.fields:
         assert_array_equal(mpr.get_flag("Ns changes"), mpt["Ns changes"])
-    ## Nothing uses the 0x40 bit of the flags
+    # Nothing uses the 0x40 bit of the flags
     assert_array_equal(mpr.get_flag("counter inc."), mpt["counter inc."])
     assert_array_almost_equal(mpr.data["time/s"],
@@ -139,33 +172,34 @@ def assert_MPR_matches_MPT(mpr, mpt, comments):
     assert_field_matches("(Q-Qo)/C", decimal=6)  # 32 bit float precision
 
     try:
-        eq_(timestamp_from_comments(comments), mpr.timestamp)
+        assert timestamp_from_comments(comments) == mpr.timestamp
     except AttributeError:
         pass
 
 
-def test_MPR1_matches_MPT1():
-    mpr1 = MPRfile(os.path.join(testdata_dir, 'bio_logic1.mpr'))
-    mpt1, comments = MPTfile(os.path.join(testdata_dir, 'bio_logic1.mpt'))
-    assert_MPR_matches_MPT(mpr1, mpt1, comments)
+@pytest.mark.parametrize('basename', [
+    'bio_logic1',
+    'bio_logic2',
+    # No bio_logic3.mpt file
+    'bio_logic4',
+    # bio_logic5 and bio_logic6 are special cases
+    'CV_C01',
+    '121_CA_455nm_6V_30min_C01',
+])
+def test_MPR_matches_MPT(testdata_dir, basename):
+    """Check the MPR parser against the MPT parser.
+
+    Load a binary .mpr file and a text .mpt file which should contain
+    exactly the same data. Check that the loaded data actually match.
+    """
+    binpath = os.path.join(testdata_dir, basename + '.mpr')
+    txtpath = os.path.join(testdata_dir, basename + '.mpt')
+    mpr = MPRfile(binpath)
+    mpt, comments = MPTfile(txtpath)
+    assert_MPR_matches_MPT(mpr, mpt, comments)
 
 
-def test_MPR2_matches_MPT2():
-    mpr2 = MPRfile(os.path.join(testdata_dir, 'bio_logic2.mpr'))
-    mpt2, comments = MPTfile(os.path.join(testdata_dir, 'bio_logic2.mpt'))
-    assert_MPR_matches_MPT(mpr2, mpt2, comments)
-
-
-## No bio_logic3.mpt file
-
-
-def test_MPR4_matches_MPT4():
-    mpr4 = MPRfile(os.path.join(testdata_dir, 'bio_logic4.mpr'))
-    mpt4, comments = MPTfile(os.path.join(testdata_dir, 'bio_logic4.mpt'))
-    assert_MPR_matches_MPT(mpr4, mpt4, comments)
-
-
-def test_MPR5_matches_MPT5():
+def test_MPR5_matches_MPT5(testdata_dir):
     mpr = MPRfile(os.path.join(testdata_dir, 'bio_logic5.mpr'))
     mpt, comments = MPTfile((re.sub(b'\tXXX\t', b'\t0\t', line) for line in
                              open(os.path.join(testdata_dir, 'bio_logic5.mpt'),
@@ -173,23 +207,8 @@ def test_MPR5_matches_MPT5():
     assert_MPR_matches_MPT(mpr, mpt, comments)
 
 
-def test_MPR6_matches_MPT6():
+def test_MPR6_matches_MPT6(testdata_dir):
     mpr = MPRfile(os.path.join(testdata_dir, 'bio_logic6.mpr'))
     mpt, comments = MPTfile(os.path.join(testdata_dir, 'bio_logic6.mpt'))
     mpr.data = mpr.data[:958]  # .mpt file is incomplete
     assert_MPR_matches_MPT(mpr, mpt, comments)
-
-
-## Tests for issue #1 -- new dtypes ##
-
-
-def test_CV_C01():
-    mpr = MPRfile(os.path.join(testdata_dir, 'CV_C01.mpr'))
-    mpt, comments = MPTfile(os.path.join(testdata_dir, 'CV_C01.mpt'))
-    assert_MPR_matches_MPT(mpr, mpt, comments)
-
-
-def test_CA_455nm():
-    mpr = MPRfile(os.path.join(testdata_dir, '121_CA_455nm_6V_30min_C01.mpr'))
-    mpt, comments = MPTfile(os.path.join(testdata_dir, '121_CA_455nm_6V_30min_C01.mpt'))
-    assert_MPR_matches_MPT(mpr, mpt, comments)

tox.ini

@@ -1,5 +1,17 @@
+# SPDX-FileCopyrightText: 2017-2021 Christopher Kerr <chris.kerr@mykolab.ch>
+# SPDX-License-Identifier: GPL-3.0-or-later
+
 [tox]
-envlist = py27,py35,py37
+envlist = py36,py37,py38,py39
 
 [testenv]
-deps=nose
-commands=nosetests
+deps =
+    flake8
+    reuse
+    pytest
+commands =
+    flake8
+    reuse lint
+    pytest
+
+[flake8]
+exclude = build,dist,*.egg-info,.cache,.git,.tox,__pycache__
+max-line-length = 100