Implement metanetx service #1

Merged: 45 commits, Jun 25, 2019

Conversation

kvikshaug (Member)

No description provided.

@codecov-io commented Jun 12, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@b3757d5).
The diff coverage is 91.02%.

@@            Coverage Diff            @@
##             master       #1   +/-   ##
=========================================
  Coverage          ?   84.18%           
=========================================
  Files             ?        8           
  Lines             ?      253           
  Branches          ?        0           
=========================================
  Hits              ?      213           
  Misses            ?       40           
  Partials          ?        0
Impacted Files Coverage Δ
src/metanetx/app.py 90.47% <100%> (ø)
src/metanetx/schemas.py 100% <100%> (ø)
src/metanetx/__init__.py 100% <100%> (ø)
src/metanetx/resources.py 63.63% <50%> (ø)
src/metanetx/data.py 97.11% <97.11%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b3757d5...4db88ba.

@Midnighter (Contributor) left a comment

A good prototype of the service. Next steps for me are:

  1. Persist information in a database (SQLAlchemy + postgres).
  2. Build a more powerful search (Elastic or Lucene or at least Whoosh).
  3. Cache responses using Redis.
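
As a rough illustration of step 1, here is a minimal SQLAlchemy sketch; the Reaction model, its columns, and the connection string are hypothetical placeholders, not code from this repository:

    # Hypothetical persistence sketch (not part of this PR): a SQLAlchemy model
    # plus a Postgres engine, as suggested in step 1 above.
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Reaction(Base):
        # One row per MetaNetX reaction record (table and column names are placeholders).
        __tablename__ = "reactions"
        id = Column(Integer, primary_key=True)
        mnx_id = Column(String, unique=True, index=True)
        name = Column(String)

    # Connection string is a placeholder; in practice it would come from configuration.
    engine = create_engine("postgresql://user:password@localhost:5432/metanetx")
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)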

stream=True,
)
response.raise_for_status()
for row in csv.DictReader(response.iter_lines(decode_unicode=True), delimiter="\t"):
@Midnighter (Contributor) commented on the diff excerpt above:

This is of course perfectly fine but I would use pandas for anything tabular unless there is a really good reason not to include it as a dependency.
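
For comparison, a small pandas sketch of the same loop; the URL is a placeholder for whichever MetaNetX table the service downloads, and the column handling is illustrative only:

    # Hypothetical pandas variant (not the PR's code): read the TSV straight
    # into a DataFrame instead of streaming rows through csv.DictReader.
    import pandas as pd

    # Placeholder URL; MetaNetX publishes several tab-separated reference files.
    url = "https://example.org/metanetx/reac_prop.tsv"
    df = pd.read_csv(url, sep="\t", comment="#")
    for row in df.itertuples(index=False):
        ...  # process each record, analogous to the DictReader loop above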

Labels: none yet
Projects: none yet
5 participants