Merge pull request #257 from GRA0007/refactor/rust-api

Rust API refactor
Benji Grant, 2023-05-16 17:07:00 +10:00, committed by GitHub
commit f8724cc704
235 changed files with 6258 additions and 4817 deletions

.github/workflows/deploy_api.yml (new file, 58 lines)

@@ -0,0 +1,58 @@
name: Deploy API

on:
  push:
    branches: ['main']
    paths: ['api/**']

env:
  REGISTRY: ghcr.io

jobs:
  build-and-push:
    name: Build Docker image and push to GitHub Container Registry
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: api
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/metadata-action@v4
        id: meta
        with:
          images: ${{ env.REGISTRY }}/${{ github.repository }}/api
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

  deploy:
    needs: [build-and-push]
    name: Deploy to EC2
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USERNAME }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            docker login ${{ env.REGISTRY }} -u ${{ github.actor }} -p ${{ secrets.GITHUB_TOKEN }}
            docker pull ${{ env.REGISTRY }}/${{ github.repository }}/api:latest
            docker stop crabfit-api
            docker rm crabfit-api
            docker run -d -p 3000:3000 --name crabfit-api --env-file ./.env ${{ env.REGISTRY }}/${{ github.repository }}/api:latest

@@ -1,37 +0,0 @@
name: Deploy Backend

on:
  push:
    branches: ['main']
    paths: ['crabfit-backend/**']

jobs:
  deploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: crabfit-backend
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 17
          cache: yarn
          cache-dependency-path: '**/yarn.lock'
      - run: yarn install --immutable
      - run: yarn build
      - id: auth
        uses: google-github-actions/auth@v0
        with:
          credentials_json: '${{ secrets.GCP_SA_KEY }}'
      - id: deploy
        uses: google-github-actions/deploy-appengine@v0
        with:
          working_directory: crabfit-backend
          version: v1

@@ -3,7 +3,7 @@ name: Deploy Frontend
 on:
   push:
     branches: ['main']
-    paths: ['crabfit-frontend/**']
+    paths: ['frontend/**']
 jobs:
   deploy:
@@ -11,7 +11,7 @@ jobs:
     defaults:
       run:
-        working-directory: crabfit-frontend
+        working-directory: frontend
     permissions:
       contents: read
@@ -33,5 +33,5 @@ jobs:
       - id: deploy
         uses: google-github-actions/deploy-appengine@v0
         with:
-          working_directory: crabfit-frontend
+          working_directory: frontend
           version: v1

.gitignore (1 change)

@@ -1,3 +1,2 @@
 /graphics
 .DS_Store
-/crabfit-browser-extension/*.zip

@@ -3,8 +3,6 @@
 Align your schedules to find the perfect time that works for everyone.
 Licensed under the GNU GPLv3.
-<a href="https://www.producthunt.com/posts/crab-fit?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-crab-fit" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=291656&theme=light" alt="Crab Fit - Use your availability to find a time that works for everyone | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>
 ## Contributing
 ### ⭐️ Bugs or feature requests
@@ -15,16 +13,19 @@ If you find any bugs or have a feature request, please create an issue by <a hre
 If you speak a language other than English and you want to help translate Crab Fit, fill out this form: https://forms.gle/azz1yGqhpLUka45S9
+### Pull requests
+If you see an issue you want to fix, or want to implement a feature you think would be useful, please feel free to open a pull request with your changes. If you can, please open an issue about the bug or feature you want to work on before starting your PR, to prevent work duplication and give others a chance to improve your idea.
 ## Setup
-1. Clone the repo.
-2. Run `yarn` in both backend and frontend folders.
-3. Run `yarn dev` in the backend folder to start the API. **Note:** you will need a google cloud app set up with datastore enabled and set your `GOOGLE_APPLICATION_CREDENTIALS` environment variable to your service key path.
-4. Run `yarn dev` in the frontend folder to start the frontend.
+1. Clone the repo and ensure you have `node`, `yarn` and `rust` installed on your machine.
+2. Run `yarn` in `frontend` folder to install dependencies, then `yarn dev` to start the dev server.
+3. Run `cargo run` in the `api` folder to start the API.
 ### 🔌 Browser extension
-The browser extension in `crabfit-browser-extension` can be tested by first running the frontend, and changing the iframe url in the extension's `popup.html` to match the local Crab Fit. Then it can be loaded as an unpacked extension in Chrome to test.
+The browser extension in `browser-extension` can be tested by first running the frontend, and changing the iframe url in the extension's `popup.html` to match the local Crab Fit. Then it can be loaded as an unpacked extension in Chrome to test.
 ## Deploy
@@ -34,4 +35,4 @@ To deploy cron jobs (i.e. monthly cleanup of old events), run `gcloud app deploy
 ### 🔌 Browser extension
-Compress everything inside the `crabfit-browser-extension` folder and use that zip to deploy using Chrome web store and Mozilla Add-on store.
+Compress everything inside the `browser-extension` folder and use that zip to deploy using Chrome web store and Mozilla Add-on store.

api/.dockerignore (new file, 2 lines)

@@ -0,0 +1,2 @@
target
.env

api/.gitignore (new file, 2 lines)

@@ -0,0 +1,2 @@
target
.env

api/Cargo.lock (generated, 3973 lines; diff suppressed because it is too large)

api/Cargo.toml (new file, 37 lines)

@@ -0,0 +1,37 @@
[package]
name = "crabfit-api"
description = "API for Crab Fit"
license = "GPL-3.0-only"
version = "3.0.0"
edition = "2021"
[features]
sql-adaptor = []
datastore-adaptor = []
[workspace]
members = ["common", "adaptors/*"]
[dependencies]
axum = { version = "0.6.18", features = ["headers"] }
serde = { version = "1.0.162", features = ["derive"] }
tokio = { version = "1.28.0", features = ["macros", "rt-multi-thread"] }
common = { path = "common" }
sql-adaptor = { path = "adaptors/sql" }
datastore-adaptor = { path = "adaptors/datastore" }
memory-adaptor = { path = "adaptors/memory" }
dotenvy = "0.15.7"
serde_json = "1.0.96"
rand = "0.8.5"
punycode = "0.4.1"
regex = "1.8.1"
tracing = "0.1.37"
tracing-subscriber = "0.3.17"
chrono = "0.4.24"
bcrypt = "0.14.0"
tower-http = { version = "0.4.0", features = ["cors", "trace"] }
tower_governor = "0.0.4"
tower = "0.4.13"
utoipa = { version = "3.3.0", features = ["axum_extras", "preserve_order"] }
utoipa-swagger-ui = { version = "3.1.3", features = ["axum", "debug-embed"] }
base64 = "0.21.0"

api/Dockerfile (new file, 31 lines)

@@ -0,0 +1,31 @@
# This dockerfile builds the API and runs it on a minimal container with the Datastore adaptor
FROM rust:latest as builder
# Install CA Certs for Hyper
RUN apt-get install -y --no-install-recommends ca-certificates
RUN update-ca-certificates
WORKDIR /usr/src/app
COPY . .
# Will build and cache the binary and dependent crates in release mode
RUN --mount=type=cache,target=/usr/local/cargo,from=rust:latest,source=/usr/local/cargo \
--mount=type=cache,target=target \
cargo build --release --features datastore-adaptor && mv ./target/release/crabfit-api ./api
# Runtime image
FROM debian:bullseye-slim
# Run as "app" user
RUN useradd -ms /bin/bash app
USER app
WORKDIR /app
# Get compiled binaries from builder's cargo install directory
COPY --from=builder /usr/src/app/api /app/api
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Run the app
EXPOSE 3000
CMD ./api

api/README.md (new file, 36 lines)

@@ -0,0 +1,36 @@
# Crab Fit API
This is the API for Crab Fit, written in Rust. It uses the [axum](https://crates.io/crates/axum) framework to run an HTTP server, and supports multiple storage adaptors.
## API docs
OpenAPI compatible API docs are generated using [utoipa](https://crates.io/crates/utoipa). You can visit them at [https://api.crab.fit/docs](https://api.crab.fit/docs).
## Storage adaptors
| Adaptor | Works with |
| ------- | ---------- |
| `memory-adaptor` | Stores data in memory |
| `sql-adaptor` | Postgres, MySQL, SQLite |
| `datastore-adaptor` | Google Datastore |
To choose an adaptor, specify it in the `features` when compiling, e.g. `cargo run --features sql-adaptor`.
Some adaptors require environment variables to be set. You can specify them in a `.env` file and they'll be loaded in using [dotenvy](https://crates.io/crates/dotenvy). See a specific adaptor's readme for more information.
> **Note**
> `memory-adaptor` is the default if no features are specified. Ensure you specify a different adaptor when deploying.
### Adding an adaptor
See [adding an adaptor](adaptors/README.md#adding-an-adaptor) in the adaptors readme.
## Environment
### CORS
In release mode, a `FRONTEND_URL` environment variable is required to correctly restrict cross-origin requests to the frontend.
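For example, in `.env` (the URL below is a placeholder; use wherever your frontend is actually hosted):
```env
FRONTEND_URL="https://crab.fit"
```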
### Cleanup task
By default, anyone can run the cleanup task at `/tasks/cleanup`. This is usually not an issue, since cleanup is based on when events were last visited rather than when the task runs. If you'd still prefer to restrict runs of the cleanup task (it can be intensive), set a `CRON_KEY` environment variable in `.env`. Requests to the route must then include an `X-Cron-Key` header whose value matches `CRON_KEY`, or the route will return a 401 Unauthorized error.
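For example (the key below is an arbitrary placeholder; generate your own secret and send the same value in the `X-Cron-Key` header from your cron scheduler):
```env
CRON_KEY="change-me-to-a-long-random-secret"
```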

api/adaptors/README.md (new file, 22 lines)

@@ -0,0 +1,22 @@
# Crab Fit Storage Adaptors
This directory contains sub-crates that connect Crab Fit to a database of some sort. For a list of available adaptors, see the [api readme](../README.md).
## Adding an adaptor
The suggested flow is to copy an existing adaptor, such as `memory`, and alter the code to work with your chosen database.
Note that you will need the following crates as dependencies in your adaptor:
- `common`<br>Includes a trait for implementing your adaptor, as well as structs your adaptor needs to return.
- `async-trait`<br>Required because the trait from `common` uses async functions; make sure you include `#[async_trait]` above your trait implementation.
- `chrono`<br>Required to deal with dates in the common structs and trait function signatures.
Once you've created the adaptor, you'll need to make sure it's included as a dependency in the root [`Cargo.toml`](../Cargo.toml), and add a feature flag with the same name. Make sure you also document the new adaptor in the [api readme](../README.md).
Finally, add a new version of the `create_adaptor` function in the [`adaptors.rs`](../src/adaptors.rs) file that will only compile if the specific feature flag you added is set. Don't forget to add a `not` version of the feature to the default memory adaptor function at the bottom of the file.
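For illustration, here's a minimal sketch of what a new adaptor's `lib.rs` could look like, assuming a hypothetical `mydb` database (the `MyDb*` names and the stub bodies are placeholders, not part of this repo):
```rust
use std::{error::Error, fmt::Display};

use async_trait::async_trait;
use chrono::{DateTime, Utc};
use common::{Adaptor, Event, Person, Stats};

pub struct MyDbAdaptor {
    // Hold your database connection/client here
}

#[async_trait]
impl Adaptor for MyDbAdaptor {
    type Error = MyDbAdaptorError;

    async fn get_stats(&self) -> Result<Stats, Self::Error> {
        // Stub: read the stats counters from your database
        Ok(Stats { event_count: 0, person_count: 0 })
    }

    async fn increment_stat_event_count(&self) -> Result<i64, Self::Error> {
        Ok(1) // Stub: atomically increment and return the new count
    }

    async fn increment_stat_person_count(&self) -> Result<i64, Self::Error> {
        Ok(1)
    }

    async fn get_people(&self, _event_id: String) -> Result<Option<Vec<Person>>, Self::Error> {
        Ok(Some(vec![])) // Return Ok(None) if the event doesn't exist
    }

    async fn upsert_person(
        &self,
        _event_id: String,
        person: Person,
    ) -> Result<Option<Person>, Self::Error> {
        Ok(Some(person))
    }

    async fn get_event(&self, _id: String) -> Result<Option<Event>, Self::Error> {
        Ok(None) // Remember to also update the event's visited date here
    }

    async fn create_event(&self, event: Event) -> Result<Event, Self::Error> {
        Ok(event)
    }

    async fn delete_events(&self, _cutoff: DateTime<Utc>) -> Result<Stats, Self::Error> {
        // Delete old events and their people, returning how many of each were removed
        Ok(Stats { event_count: 0, person_count: 0 })
    }
}

#[derive(Debug)]
pub enum MyDbAdaptorError {}

impl Display for MyDbAdaptorError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "MyDb adaptor error")
    }
}

impl Error for MyDbAdaptorError {}
```
The matching `create_adaptor` variant in [`adaptors.rs`](../src/adaptors.rs) would then be gated behind `#[cfg(feature = "mydb-adaptor")]`, mirroring the existing SQL and Datastore versions.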
## FAQ
Why is it spelt "adaptor" and not "adapter"?
> The maintainer lives in Australia, where it's usually spelt "adaptor" 😎

@@ -0,0 +1,14 @@
[package]
name = "datastore-adaptor"
version = "0.1.0"
edition = "2021"
[dependencies]
async-trait = "0.1.68"
chrono = "0.4.24"
common = { path = "../../common" }
# Uses custom version of google-cloud that has support for NULL values
google-cloud = { git = "https://github.com/GRA0007/google-cloud-rs.git", features = ["datastore", "derive"] }
serde = "1.0.163"
serde_json = "1.0.96"
tokio = { version = "1.28.1", features = ["rt-multi-thread"] }

@@ -0,0 +1,13 @@
# Google Datastore Adaptor
This adaptor works with [Google Cloud Datastore](https://cloud.google.com/datastore). Please note that it's compatible with Firestore in Datastore mode, but not with Firestore in Native mode.
## Environment
To use this adaptor, make sure you have the `GCP_CREDENTIALS` environment variable set to your service account credentials in JSON format. See [this page](https://developers.google.com/workspace/guides/create-credentials#service-account) for info on setting up a service account and generating credentials.
Example:
```env
GCP_CREDENTIALS='{"type":"service_account","project_id":"my-project"}'
```

@@ -0,0 +1,328 @@
use std::{env, error::Error, fmt::Display};
use async_trait::async_trait;
use chrono::{DateTime, NaiveDateTime, Utc};
use common::{Adaptor, Event, Person, Stats};
use google_cloud::{
authorize::ApplicationCredentials,
datastore::{Client, Filter, FromValue, IntoValue, Key, KeyID, Query},
};
use tokio::sync::Mutex;
pub struct DatastoreAdaptor {
client: Mutex<Client>,
}
// Keys
const STATS_KIND: &str = "Stats";
const EVENT_KIND: &str = "Event";
const PERSON_KIND: &str = "Person";
const STATS_EVENTS_ID: &str = "eventCount";
const STATS_PEOPLE_ID: &str = "personCount";
#[async_trait]
impl Adaptor for DatastoreAdaptor {
type Error = DatastoreAdaptorError;
async fn get_stats(&self) -> Result<Stats, Self::Error> {
let mut client = self.client.lock().await;
let event_key = Key::new(STATS_KIND).id(STATS_EVENTS_ID);
let event_stats: DatastoreStats = client.get(event_key).await?.unwrap_or_default();
let person_key = Key::new(STATS_KIND).id(STATS_PEOPLE_ID);
let person_stats: DatastoreStats = client.get(person_key).await?.unwrap_or_default();
Ok(Stats {
event_count: event_stats.value,
person_count: person_stats.value,
})
}
async fn increment_stat_event_count(&self) -> Result<i64, Self::Error> {
let mut client = self.client.lock().await;
let key = Key::new(STATS_KIND).id(STATS_EVENTS_ID);
let mut event_stats: DatastoreStats = client.get(key.clone()).await?.unwrap_or_default();
event_stats.value += 1;
client.put((key, event_stats.clone())).await?;
Ok(event_stats.value)
}
async fn increment_stat_person_count(&self) -> Result<i64, Self::Error> {
let mut client = self.client.lock().await;
let key = Key::new(STATS_KIND).id(STATS_PEOPLE_ID);
let mut person_stats: DatastoreStats = client.get(key.clone()).await?.unwrap_or_default();
person_stats.value += 1;
client.put((key, person_stats.clone())).await?;
Ok(person_stats.value)
}
async fn get_people(&self, event_id: String) -> Result<Option<Vec<Person>>, Self::Error> {
let mut client = self.client.lock().await;
// Check the event exists
if client
.get::<DatastoreEvent, _>(Key::new(EVENT_KIND).id(event_id.clone()))
.await?
.is_none()
{
return Ok(None);
}
Ok(Some(
client
.query(
Query::new(PERSON_KIND)
.filter(Filter::Equal("eventId".into(), event_id.into_value())),
)
.await?
.into_iter()
.filter_map(|entity| {
DatastorePerson::from_value(entity.properties().clone())
.ok()
.map(|ds_person| ds_person.into())
})
.collect(),
))
}
async fn upsert_person(
&self,
event_id: String,
person: Person,
) -> Result<Option<Person>, Self::Error> {
let mut client = self.client.lock().await;
// Check the event exists
if client
.get::<DatastoreEvent, _>(Key::new(EVENT_KIND).id(event_id.clone()))
.await?
.is_none()
{
return Ok(None);
}
// Check if person exists
let existing_person = client
.query(
Query::new(PERSON_KIND)
.filter(Filter::Equal(
"eventId".into(),
event_id.clone().into_value(),
))
.filter(Filter::Equal(
"name".into(),
person.name.clone().into_value(),
)),
)
.await?;
let mut key = Key::new(PERSON_KIND);
if let Some(entity) = existing_person.first() {
key = entity.key().clone();
}
client
.put((key, DatastorePerson::from_person(person.clone(), event_id)))
.await?;
Ok(Some(person))
}
async fn get_event(&self, id: String) -> Result<Option<Event>, Self::Error> {
let mut client = self.client.lock().await;
let key = Key::new(EVENT_KIND).id(id.clone());
let existing_event = client.get::<DatastoreEvent, _>(key.clone()).await?;
// Mark as visited if it exists
if let Some(mut event) = existing_event.clone() {
event.visited = Utc::now().timestamp();
client.put((key, event)).await?;
}
Ok(existing_event.map(|e| e.to_event(id)))
}
async fn create_event(&self, event: Event) -> Result<Event, Self::Error> {
let mut client = self.client.lock().await;
let key = Key::new(EVENT_KIND).id(event.id.clone());
let ds_event: DatastoreEvent = event.clone().into();
client.put((key, ds_event)).await?;
Ok(event)
}
async fn delete_events(&self, cutoff: DateTime<Utc>) -> Result<Stats, Self::Error> {
let mut client = self.client.lock().await;
let mut keys_to_delete: Vec<Key> = client
.query(Query::new(EVENT_KIND).filter(Filter::LesserThan(
"visited".into(),
cutoff.timestamp().into_value(),
)))
.await?
.iter()
.map(|entity| entity.key().clone())
.collect();
let event_count = keys_to_delete.len() as i64;
let events_to_delete = keys_to_delete.clone();
for e in events_to_delete.iter() {
if let KeyID::StringID(id) = e.get_id() {
let mut event_people_to_delete: Vec<Key> = client
.query(
Query::new(PERSON_KIND)
.filter(Filter::Equal("eventId".into(), id.clone().into_value())),
)
.await?
.iter()
.map(|entity| entity.key().clone())
.collect();
keys_to_delete.append(&mut event_people_to_delete);
}
}
let person_count = keys_to_delete.len() as i64 - event_count;
client.delete_all(keys_to_delete).await?;
Ok(Stats {
event_count,
person_count,
})
}
}
impl DatastoreAdaptor {
pub async fn new() -> Self {
// Load credentials
let credentials: ApplicationCredentials = serde_json::from_str(
&env::var("GCP_CREDENTIALS").expect("Expected GCP_CREDENTIALS environment variable"),
)
.expect("GCP_CREDENTIALS environment variable is not valid JSON");
// Connect to datastore
let client = Client::from_credentials(credentials.project_id.clone(), credentials.clone())
.await
.expect("Failed to setup datastore client");
let client = Mutex::new(client);
println!(
"🎛️ Connected to datastore in project {}",
credentials.project_id
);
Self { client }
}
}
#[derive(FromValue, IntoValue, Default, Clone)]
struct DatastoreStats {
value: i64,
}
#[derive(FromValue, IntoValue, Clone)]
struct DatastoreEvent {
name: String,
created: i64,
visited: i64,
times: Vec<String>,
timezone: String,
}
#[derive(FromValue, IntoValue)]
#[allow(non_snake_case)]
struct DatastorePerson {
name: String,
password: Option<String>,
created: i64,
eventId: String,
availability: Vec<String>,
}
impl From<DatastorePerson> for Person {
fn from(value: DatastorePerson) -> Self {
Self {
name: value.name,
password_hash: value.password,
created_at: unix_to_date(value.created),
availability: value.availability,
}
}
}
impl DatastorePerson {
fn from_person(person: Person, event_id: String) -> Self {
Self {
name: person.name,
password: person.password_hash,
created: person.created_at.timestamp(),
eventId: event_id,
availability: person.availability,
}
}
}
impl From<Event> for DatastoreEvent {
fn from(value: Event) -> Self {
Self {
name: value.name,
created: value.created_at.timestamp(),
visited: value.visited_at.timestamp(),
times: value.times,
timezone: value.timezone,
}
}
}
impl DatastoreEvent {
fn to_event(&self, event_id: String) -> Event {
Event {
id: event_id,
name: self.name.clone(),
created_at: unix_to_date(self.created),
visited_at: unix_to_date(self.visited),
times: self.times.clone(),
timezone: self.timezone.clone(),
}
}
}
fn unix_to_date(unix: i64) -> DateTime<Utc> {
DateTime::from_utc(NaiveDateTime::from_timestamp_opt(unix, 0).unwrap(), Utc)
}
#[derive(Debug)]
pub enum DatastoreAdaptorError {
DatastoreError(google_cloud::error::Error),
}
impl Display for DatastoreAdaptorError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
DatastoreAdaptorError::DatastoreError(e) => write!(f, "Datastore Error: {}", e),
}
}
}
impl Error for DatastoreAdaptorError {}
impl From<google_cloud::error::Error> for DatastoreAdaptorError {
fn from(value: google_cloud::error::Error) -> Self {
Self::DatastoreError(value)
}
}
impl From<google_cloud::error::ConvertError> for DatastoreAdaptorError {
fn from(value: google_cloud::error::ConvertError) -> Self {
Self::DatastoreError(google_cloud::error::Error::Convert(value))
}
}

@@ -0,0 +1,10 @@
[package]
name = "memory-adaptor"
version = "0.1.0"
edition = "2021"
[dependencies]
async-trait = "0.1.68"
chrono = "0.4.24"
common = { path = "../../common" }
tokio = { version = "1.28.1", features = ["rt-multi-thread"] }

@@ -0,0 +1,6 @@
# Memory Adaptor
This adaptor stores everything in memory, and all data is lost when the API is stopped. Useful for testing.
> **Warning**
> Do not use this adaptor in production!

@@ -0,0 +1,167 @@
use std::{collections::HashMap, error::Error, fmt::Display};
use async_trait::async_trait;
use chrono::{DateTime, Utc};
use common::{Adaptor, Event, Person, Stats};
use tokio::sync::Mutex;
struct State {
stats: Stats,
events: HashMap<String, Event>,
people: HashMap<(String, String), Person>,
}
pub struct MemoryAdaptor {
state: Mutex<State>,
}
#[async_trait]
impl Adaptor for MemoryAdaptor {
type Error = MemoryAdaptorError;
async fn get_stats(&self) -> Result<Stats, Self::Error> {
let state = self.state.lock().await;
Ok(state.stats.clone())
}
async fn increment_stat_event_count(&self) -> Result<i64, Self::Error> {
let mut state = self.state.lock().await;
state.stats.event_count += 1;
Ok(state.stats.event_count)
}
async fn increment_stat_person_count(&self) -> Result<i64, Self::Error> {
let mut state = self.state.lock().await;
state.stats.person_count += 1;
Ok(state.stats.person_count)
}
async fn get_people(&self, event_id: String) -> Result<Option<Vec<Person>>, Self::Error> {
let state = self.state.lock().await;
// Event doesn't exist
if state.events.get(&event_id).is_none() {
return Ok(None);
}
Ok(Some(
state
.people
.clone()
.into_iter()
.filter_map(|((p_event_id, _), p)| {
if p_event_id == event_id {
Some(p)
} else {
None
}
})
.collect(),
))
}
async fn upsert_person(
&self,
event_id: String,
person: Person,
) -> Result<Option<Person>, Self::Error> {
let mut state = self.state.lock().await;
// Check event exists
if state.events.get(&event_id).is_none() {
return Ok(None);
}
state
.people
.insert((event_id, person.name.clone()), person.clone());
Ok(Some(person))
}
async fn get_event(&self, id: String) -> Result<Option<Event>, Self::Error> {
let mut state = self.state.lock().await;
let event = state.events.get(&id).cloned();
if let Some(mut event) = event.clone() {
event.visited_at = Utc::now();
state.events.insert(id, event);
}
Ok(event)
}
async fn create_event(&self, event: Event) -> Result<Event, Self::Error> {
let mut state = self.state.lock().await;
state.events.insert(event.id.clone(), event.clone());
Ok(event)
}
async fn delete_events(&self, cutoff: DateTime<Utc>) -> Result<Stats, Self::Error> {
let mut state = self.state.lock().await;
// Delete events older than cutoff date
let mut deleted_event_ids: Vec<String> = Vec::new();
state.events = state
.events
.clone()
.into_iter()
.filter(|(id, event)| {
if event.visited_at >= cutoff {
true
} else {
deleted_event_ids.push(id.into());
false
}
})
.collect();
let mut person_count = state.people.len() as i64;
// Keep only people whose event still exists
state.people = state
.people
.clone()
.into_iter()
.filter(|((event_id, _), _)| !deleted_event_ids.contains(event_id))
.collect();
person_count -= state.people.len() as i64;
Ok(Stats {
event_count: deleted_event_ids.len() as i64,
person_count,
})
}
}
impl MemoryAdaptor {
pub async fn new() -> Self {
println!("🧠 Using in-memory storage");
println!("🚨 WARNING: All data will be lost when the process ends. Make sure you choose a database adaptor before deploying.");
let state = Mutex::new(State {
stats: Stats {
event_count: 0,
person_count: 0,
},
events: HashMap::new(),
people: HashMap::new(),
});
Self { state }
}
}
#[derive(Debug)]
pub enum MemoryAdaptorError {}
impl Display for MemoryAdaptorError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Memory adaptor error")
}
}
impl Error for MemoryAdaptorError {}

@@ -0,0 +1,14 @@
[package]
name = "sql-adaptor"
version = "0.1.0"
edition = "2021"
[dependencies]
async-trait = "0.1.68"
common = { path = "../../common" }
sea-orm = { version = "0.11.3", features = [ "macros", "sqlx-mysql", "sqlx-postgres", "sqlx-sqlite", "runtime-tokio-native-tls" ] }
serde = { version = "1.0.162", features = [ "derive" ] }
async-std = { version = "1", features = ["attributes", "tokio1"] }
sea-orm-migration = "0.11.0"
serde_json = "1.0.96"
chrono = "0.4.24"

@@ -0,0 +1,13 @@
# SQL Adaptor
This adaptor works with [Postgres](https://www.postgresql.org/), [MySQL](https://www.mysql.com/) or [SQLite](https://sqlite.org/index.html) databases.
## Environment
To use this adaptor, make sure you have the `DATABASE_URL` environment variable set to the database url for your chosen database.
Example:
```env
DATABASE_URL="postgresql://username:password@localhost:5432/crabfit"
```

@@ -0,0 +1,29 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.11.3
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "event")]
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub id: String,
pub name: String,
pub created_at: DateTime,
pub visited_at: DateTime,
pub times: Json,
pub timezone: String,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
#[sea_orm(has_many = "super::person::Entity")]
Person,
}
impl Related<super::person::Entity> for Entity {
fn to() -> RelationDef {
Relation::Person.def()
}
}
impl ActiveModelBehavior for ActiveModel {}

@@ -0,0 +1,7 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.11.3
pub mod prelude;
pub mod event;
pub mod person;
pub mod stats;

@@ -0,0 +1,35 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.11.3
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "person")]
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub name: String,
pub password_hash: Option<String>,
pub created_at: DateTime,
pub availability: Json,
#[sea_orm(primary_key, auto_increment = false)]
pub event_id: String,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
#[sea_orm(
belongs_to = "super::event::Entity",
from = "Column::EventId",
to = "super::event::Column::Id",
on_update = "Cascade",
on_delete = "Cascade"
)]
Event,
}
impl Related<super::event::Entity> for Entity {
fn to() -> RelationDef {
Relation::Event.def()
}
}
impl ActiveModelBehavior for ActiveModel {}

@@ -0,0 +1,5 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.11.3
pub use super::event::Entity as Event;
pub use super::person::Entity as Person;
pub use super::stats::Entity as Stats;

@@ -0,0 +1,17 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.11.3
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "stats")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub event_count: i32,
pub person_count: i32,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}

api/adaptors/sql/src/lib.rs (new file, 253 lines)

@@ -0,0 +1,253 @@
use std::{env, error::Error};
use async_trait::async_trait;
use chrono::{DateTime, Utc};
use common::{Adaptor, Event, Person, Stats};
use entity::{event, person, stats};
use migration::{Migrator, MigratorTrait};
use sea_orm::{
strum::Display,
ActiveModelTrait,
ActiveValue::{NotSet, Set},
ColumnTrait, Database, DatabaseConnection, DbErr, EntityTrait, ModelTrait, QueryFilter,
TransactionError, TransactionTrait, TryIntoModel,
};
use serde_json::json;
mod entity;
mod migration;
pub struct SqlAdaptor {
db: DatabaseConnection,
}
#[async_trait]
impl Adaptor for SqlAdaptor {
type Error = SqlAdaptorError;
async fn get_stats(&self) -> Result<Stats, Self::Error> {
let stats_row = get_stats_row(&self.db).await?;
Ok(Stats {
event_count: stats_row.event_count.unwrap() as i64,
person_count: stats_row.person_count.unwrap() as i64,
})
}
async fn increment_stat_event_count(&self) -> Result<i64, Self::Error> {
let mut current_stats = get_stats_row(&self.db).await?;
current_stats.event_count = Set(current_stats.event_count.unwrap() + 1);
Ok(current_stats.save(&self.db).await?.event_count.unwrap() as i64)
}
async fn increment_stat_person_count(&self) -> Result<i64, Self::Error> {
let mut current_stats = get_stats_row(&self.db).await?;
current_stats.person_count = Set(current_stats.person_count.unwrap() + 1);
Ok(current_stats.save(&self.db).await?.person_count.unwrap() as i64)
}
async fn get_people(&self, event_id: String) -> Result<Option<Vec<Person>>, Self::Error> {
// TODO: optimize into one query
let event_row = event::Entity::find_by_id(event_id).one(&self.db).await?;
Ok(match event_row {
Some(event) => Some(
event
.find_related(person::Entity)
.all(&self.db)
.await?
.into_iter()
.map(|model| model.into())
.collect(),
),
None => None,
})
}
async fn upsert_person(
&self,
event_id: String,
person: Person,
) -> Result<Option<Person>, Self::Error> {
let data = person::ActiveModel {
name: Set(person.name.clone()),
password_hash: Set(person.password_hash),
created_at: Set(person.created_at.naive_utc()),
availability: Set(serde_json::to_value(person.availability).unwrap_or(json!([]))),
event_id: Set(event_id.clone()),
};
// Check if the event exists
if event::Entity::find_by_id(event_id.clone())
.one(&self.db)
.await?
.is_none()
{
return Ok(None);
}
Ok(Some(
match person::Entity::find_by_id((person.name, event_id))
.one(&self.db)
.await?
{
Some(_) => data.update(&self.db).await?.try_into_model()?.into(),
None => data.insert(&self.db).await?.try_into_model()?.into(),
},
))
}
async fn get_event(&self, id: String) -> Result<Option<Event>, Self::Error> {
let existing_event = event::Entity::find_by_id(id).one(&self.db).await?;
// Mark as visited
if let Some(event) = existing_event.clone() {
let mut event: event::ActiveModel = event.into();
event.visited_at = Set(Utc::now().naive_utc());
event.save(&self.db).await?;
}
Ok(existing_event.map(|model| model.into()))
}
async fn create_event(&self, event: Event) -> Result<Event, Self::Error> {
Ok(event::ActiveModel {
id: Set(event.id),
name: Set(event.name),
created_at: Set(event.created_at.naive_utc()),
visited_at: Set(event.visited_at.naive_utc()),
times: Set(serde_json::to_value(event.times).unwrap_or(json!([]))),
timezone: Set(event.timezone),
}
.insert(&self.db)
.await?
.try_into_model()?
.into())
}
async fn delete_events(&self, cutoff: DateTime<Utc>) -> Result<Stats, Self::Error> {
let (event_count, person_count) = self
.db
.transaction::<_, (i64, i64), DbErr>(|t| {
Box::pin(async move {
// Get events older than the cutoff date
let old_events = event::Entity::find()
.filter(event::Column::VisitedAt.lt(cutoff.naive_utc()))
.all(t)
.await?;
// Delete people
let mut people_deleted: i64 = 0;
// TODO: run concurrently
for e in old_events.iter() {
let people_delete_result = person::Entity::delete_many()
.filter(person::Column::EventId.eq(&e.id))
.exec(t)
.await?;
people_deleted += people_delete_result.rows_affected as i64;
}
// Delete events
let event_delete_result = event::Entity::delete_many()
.filter(event::Column::VisitedAt.lt(cutoff.naive_utc()))
.exec(t)
.await?;
Ok((event_delete_result.rows_affected as i64, people_deleted))
})
})
.await?;
Ok(Stats {
event_count,
person_count,
})
}
}
// Get the current stats as an ActiveModel
async fn get_stats_row(db: &DatabaseConnection) -> Result<stats::ActiveModel, DbErr> {
let current_stats = stats::Entity::find().one(db).await?;
Ok(match current_stats {
Some(model) => model.into(),
None => stats::ActiveModel {
id: NotSet,
event_count: Set(0),
person_count: Set(0),
},
})
}
impl SqlAdaptor {
pub async fn new() -> Self {
let connection_string =
env::var("DATABASE_URL").expect("Expected DATABASE_URL environment variable");
// Connect to the database
let db = Database::connect(&connection_string)
.await
.expect("Failed to connect to SQL database");
println!(
"{} Connected to database at {}",
match db {
DatabaseConnection::SqlxMySqlPoolConnection(_) => "🐬",
DatabaseConnection::SqlxPostgresPoolConnection(_) => "🐘",
DatabaseConnection::SqlxSqlitePoolConnection(_) => "🪶",
DatabaseConnection::Disconnected => panic!("Failed to connect to SQL database"),
},
connection_string
);
// Setup tables
Migrator::up(&db, None)
.await
.expect("Failed to set up tables in the database");
Self { db }
}
}
impl From<event::Model> for Event {
fn from(value: event::Model) -> Self {
Self {
id: value.id,
name: value.name,
created_at: DateTime::<Utc>::from_utc(value.created_at, Utc),
visited_at: DateTime::<Utc>::from_utc(value.visited_at, Utc),
times: serde_json::from_value(value.times).unwrap_or(vec![]),
timezone: value.timezone,
}
}
}
impl From<person::Model> for Person {
fn from(value: person::Model) -> Self {
Self {
name: value.name,
password_hash: value.password_hash,
created_at: DateTime::<Utc>::from_utc(value.created_at, Utc),
availability: serde_json::from_value(value.availability).unwrap_or(vec![]),
}
}
}
#[derive(Display, Debug)]
pub enum SqlAdaptorError {
DbErr(DbErr),
TransactionError(TransactionError<DbErr>),
}
impl Error for SqlAdaptorError {}
impl From<DbErr> for SqlAdaptorError {
fn from(value: DbErr) -> Self {
Self::DbErr(value)
}
}
impl From<TransactionError<DbErr>> for SqlAdaptorError {
fn from(value: TransactionError<DbErr>) -> Self {
Self::TransactionError(value)
}
}

@@ -0,0 +1,122 @@
use sea_orm_migration::prelude::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
print!("Setting up database...");
// Stats table
manager
.create_table(
Table::create()
.table(Stats::Table)
.if_not_exists()
.col(
ColumnDef::new(Stats::Id)
.integer()
.not_null()
.auto_increment()
.primary_key(),
)
.col(ColumnDef::new(Stats::EventCount).integer().not_null())
.col(ColumnDef::new(Stats::PersonCount).integer().not_null())
.to_owned(),
)
.await?;
// Events table
manager
.create_table(
Table::create()
.table(Event::Table)
.if_not_exists()
.col(ColumnDef::new(Event::Id).string().not_null().primary_key())
.col(ColumnDef::new(Event::Name).string().not_null())
.col(ColumnDef::new(Event::CreatedAt).timestamp().not_null())
.col(ColumnDef::new(Event::VisitedAt).timestamp().not_null())
.col(ColumnDef::new(Event::Times).json().not_null())
.col(ColumnDef::new(Event::Timezone).string().not_null())
.to_owned(),
)
.await?;
// People table
manager
.create_table(
Table::create()
.table(Person::Table)
.if_not_exists()
.col(ColumnDef::new(Person::Name).string().not_null())
.col(ColumnDef::new(Person::PasswordHash).string())
.col(ColumnDef::new(Person::CreatedAt).timestamp().not_null())
.col(ColumnDef::new(Person::Availability).json().not_null())
.col(ColumnDef::new(Person::EventId).string().not_null())
.primary_key(Index::create().col(Person::EventId).col(Person::Name))
.to_owned(),
)
.await?;
// Relation
manager
.create_foreign_key(
ForeignKey::create()
.name("FK_person_event")
.from(Person::Table, Person::EventId)
.to(Event::Table, Event::Id)
.on_delete(ForeignKeyAction::Cascade)
.on_update(ForeignKeyAction::Cascade)
.to_owned(),
)
.await?;
println!(" done");
Ok(())
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.drop_table(Table::drop().table(Stats::Table).to_owned())
.await?;
manager
.drop_table(Table::drop().table(Person::Table).to_owned())
.await?;
manager
.drop_table(Table::drop().table(Event::Table).to_owned())
.await?;
Ok(())
}
}
/// Learn more at https://docs.rs/sea-query#iden
#[derive(Iden)]
enum Stats {
Table,
Id,
EventCount,
PersonCount,
}
#[derive(Iden)]
enum Event {
Table,
Id,
Name,
CreatedAt,
VisitedAt,
Times,
Timezone,
}
#[derive(Iden)]
enum Person {
Table,
Name,
PasswordHash,
CreatedAt,
Availability,
EventId,
}

@@ -0,0 +1,12 @@
pub use sea_orm_migration::prelude::*;
mod m01_setup_tables;
pub struct Migrator;
#[async_trait::async_trait]
impl MigratorTrait for Migrator {
fn migrations() -> Vec<Box<dyn MigrationTrait>> {
vec![Box::new(m01_setup_tables::Migration)]
}
}

api/common/Cargo.toml (new file, 9 lines)

@@ -0,0 +1,9 @@
[package]
name = "common"
description = "Shared structs and traits for the data storage and transfer of Crab Fit"
version = "0.1.0"
edition = "2021"
[dependencies]
async-trait = "0.1.68"
chrono = "0.4.24"

api/common/README.md (new file, 3 lines)

@@ -0,0 +1,3 @@
# Common
This crate contains the adaptor trait, and structs that are used by it. These are separated into their own crate so that the root crate and the adaptors can import from it without causing a circular dependency.

api/common/src/lib.rs (new file, 54 lines)

@@ -0,0 +1,54 @@
use std::error::Error;
use async_trait::async_trait;
use chrono::{DateTime, Utc};
/// Data storage adaptor, all methods on an adaptor can return an error if
/// something goes wrong, or potentially None if the data requested was not found.
#[async_trait]
pub trait Adaptor: Send + Sync {
type Error: Error;
async fn get_stats(&self) -> Result<Stats, Self::Error>;
async fn increment_stat_event_count(&self) -> Result<i64, Self::Error>;
async fn increment_stat_person_count(&self) -> Result<i64, Self::Error>;
async fn get_people(&self, event_id: String) -> Result<Option<Vec<Person>>, Self::Error>;
async fn upsert_person(
&self,
event_id: String,
person: Person,
) -> Result<Option<Person>, Self::Error>;
/// Get an event and update visited date to current time
async fn get_event(&self, id: String) -> Result<Option<Event>, Self::Error>;
async fn create_event(&self, event: Event) -> Result<Event, Self::Error>;
/// Delete events older than a cutoff date, as well as any associated people.
/// Returns the number of events and people deleted.
async fn delete_events(&self, cutoff: DateTime<Utc>) -> Result<Stats, Self::Error>;
}
#[derive(Clone)]
pub struct Stats {
pub event_count: i64,
pub person_count: i64,
}
#[derive(Clone)]
pub struct Event {
pub id: String,
pub name: String,
pub created_at: DateTime<Utc>,
pub visited_at: DateTime<Utc>,
pub times: Vec<String>,
pub timezone: String,
}
#[derive(Clone)]
pub struct Person {
pub name: String,
pub password_hash: Option<String>,
pub created_at: DateTime<Utc>,
pub availability: Vec<String>,
}

api/src/adaptors.rs (new file, 15 lines)

@@ -0,0 +1,15 @@
#[cfg(feature = "sql-adaptor")]
pub async fn create_adaptor() -> sql_adaptor::SqlAdaptor {
sql_adaptor::SqlAdaptor::new().await
}
#[cfg(feature = "datastore-adaptor")]
pub async fn create_adaptor() -> datastore_adaptor::DatastoreAdaptor {
datastore_adaptor::DatastoreAdaptor::new().await
}
#[cfg(not(feature = "sql-adaptor"))]
#[cfg(not(feature = "datastore-adaptor"))]
pub async fn create_adaptor() -> memory_adaptor::MemoryAdaptor {
memory_adaptor::MemoryAdaptor::new().await
}

api/src/docs.rs (new file, 60 lines)

@@ -0,0 +1,60 @@
use crate::payloads;
use crate::routes;
use utoipa::openapi::security::ApiKey;
use utoipa::openapi::security::ApiKeyValue;
use utoipa::{
openapi::security::{HttpAuthScheme, HttpBuilder, SecurityScheme},
Modify, OpenApi,
};
// OpenAPI documentation
#[derive(OpenApi)]
#[openapi(
info(title = "Crab Fit API"),
paths(
routes::stats::get_stats,
routes::event::create_event,
routes::event::get_event,
routes::person::get_people,
routes::person::get_person,
routes::person::update_person,
routes::tasks::cleanup,
),
components(schemas(
payloads::StatsResponse,
payloads::EventResponse,
payloads::PersonResponse,
payloads::EventInput,
payloads::PersonInput,
)),
tags(
(name = "info"),
(name = "event"),
(name = "person"),
(name = "tasks"),
),
modifiers(&SecurityAddon),
)]
pub struct ApiDoc;
struct SecurityAddon;
// Add password auth spec
impl Modify for SecurityAddon {
fn modify(&self, openapi: &mut utoipa::openapi::OpenApi) {
openapi.components.as_mut().unwrap().add_security_scheme(
"password",
SecurityScheme::Http(
HttpBuilder::new()
.scheme(HttpAuthScheme::Bearer)
.bearer_format("base64")
.build(),
),
);
openapi.components.as_mut().unwrap().add_security_scheme(
"cron-key",
SecurityScheme::ApiKey(ApiKey::Header(ApiKeyValue::new("X-Cron-Key"))),
);
}
}

api/src/errors.rs (new file, 22 lines)

@@ -0,0 +1,22 @@
use axum::{http::StatusCode, response::IntoResponse};
use common::Adaptor;
pub enum ApiError<A: Adaptor> {
AdaptorError(A::Error),
NotFound,
NotAuthorized,
}
// Define what the error types above should return
impl<A: Adaptor> IntoResponse for ApiError<A> {
fn into_response(self) -> axum::response::Response {
match self {
ApiError::AdaptorError(e) => {
tracing::error!(?e);
StatusCode::INTERNAL_SERVER_ERROR.into_response()
}
ApiError::NotFound => StatusCode::NOT_FOUND.into_response(),
ApiError::NotAuthorized => StatusCode::UNAUTHORIZED.into_response(),
}
}
}

api/src/main.rs (new file, 116 lines)

@@ -0,0 +1,116 @@
use std::{env, net::SocketAddr, sync::Arc};
use axum::{
error_handling::HandleErrorLayer,
extract,
http::{HeaderValue, Method},
routing::{get, patch, post},
BoxError, Router, Server,
};
use routes::*;
use tokio::sync::Mutex;
use tower::ServiceBuilder;
use tower_governor::{errors::display_error, governor::GovernorConfigBuilder, GovernorLayer};
use tower_http::{cors::CorsLayer, trace::TraceLayer};
use tracing::Level;
use utoipa::OpenApi;
use utoipa_swagger_ui::SwaggerUi;
use crate::adaptors::create_adaptor;
use crate::docs::ApiDoc;
mod adaptors;
mod docs;
mod errors;
mod payloads;
mod routes;
pub struct ApiState<A> {
adaptor: A,
}
pub type State<A> = extract::State<Arc<Mutex<ApiState<A>>>>;
#[tokio::main]
async fn main() {
tracing_subscriber::fmt().with_max_level(Level::INFO).init();
// Load env
dotenvy::dotenv().ok();
let shared_state = Arc::new(Mutex::new(ApiState {
adaptor: create_adaptor().await,
}));
// CORS configuration
let cors = CorsLayer::new()
.allow_methods([Method::GET, Method::POST, Method::PATCH])
.allow_origin(
if cfg!(debug_assertions) {
"http://localhost:1234".to_owned()
} else {
env::var("FRONTEND_URL").expect("Missing FRONTEND_URL environment variable")
}
.parse::<HeaderValue>()
.unwrap(),
);
// Rate limiting configuration (using tower_governor)
// From the docs: Allows bursts with up to eight requests and replenishes
// one element after 500ms, based on peer IP.
let governor_config = Box::new(GovernorConfigBuilder::default().finish().unwrap());
let rate_limit = ServiceBuilder::new()
// Handle errors from governor and convert into HTTP responses
.layer(HandleErrorLayer::new(|e: BoxError| async move {
display_error(e)
}))
.layer(GovernorLayer {
config: Box::leak(governor_config),
});
let app = Router::new()
.merge(SwaggerUi::new("/docs").url("/docs/openapi.json", ApiDoc::openapi()))
.route("/", get(get_root))
.route("/stats", get(stats::get_stats))
.route("/event", post(event::create_event))
.route("/event/:event_id", get(event::get_event))
.route("/event/:event_id/people", get(person::get_people))
.route(
"/event/:event_id/people/:person_name",
get(person::get_person),
)
.route(
"/event/:event_id/people/:person_name",
patch(person::update_person),
)
.route("/tasks/cleanup", get(tasks::cleanup))
.with_state(shared_state)
.layer(cors)
.layer(rate_limit)
.layer(TraceLayer::new_for_http());
let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
println!(
"🦀 Crab Fit API listening at http://{} in {} mode",
addr,
if cfg!(debug_assertions) {
"debug"
} else {
"release"
}
);
Server::bind(&addr)
.serve(app.into_make_service_with_connect_info::<SocketAddr>())
.with_graceful_shutdown(async {
tokio::signal::ctrl_c()
.await
.expect("Failed to install Ctrl+C handler")
})
.await
.unwrap();
}
async fn get_root() -> String {
format!("Crab Fit API v{}", env!("CARGO_PKG_VERSION"))
}

api/src/payloads.rs (new file, 75 lines)

@@ -0,0 +1,75 @@
use axum::Json;
use common::{Event, Person, Stats};
use serde::{Deserialize, Serialize};
use utoipa::ToSchema;
use crate::errors::ApiError;
pub type ApiResult<T, A> = Result<Json<T>, ApiError<A>>;
#[derive(Deserialize, ToSchema)]
pub struct EventInput {
pub name: Option<String>,
pub times: Vec<String>,
pub timezone: String,
}
#[derive(Serialize, ToSchema)]
pub struct EventResponse {
pub id: String,
pub name: String,
pub times: Vec<String>,
pub timezone: String,
pub created_at: i64,
}
impl From<Event> for EventResponse {
fn from(value: Event) -> Self {
Self {
id: value.id,
name: value.name,
times: value.times,
timezone: value.timezone,
created_at: value.created_at.timestamp(),
}
}
}
#[derive(Serialize, ToSchema)]
pub struct StatsResponse {
pub event_count: i64,
pub person_count: i64,
pub version: String,
}
impl From<Stats> for StatsResponse {
fn from(value: Stats) -> Self {
Self {
event_count: value.event_count,
person_count: value.person_count,
version: env!("CARGO_PKG_VERSION").to_string(),
}
}
}
#[derive(Serialize, ToSchema)]
pub struct PersonResponse {
pub name: String,
pub availability: Vec<String>,
pub created_at: i64,
}
impl From<Person> for PersonResponse {
fn from(value: Person) -> Self {
Self {
name: value.name,
availability: value.availability,
created_at: value.created_at.timestamp(),
}
}
}
#[derive(Deserialize, ToSchema)]
pub struct PersonInput {
pub availability: Vec<String>,
}

api/src/res/adjectives.json (new file, 201 lines)

@@ -0,0 +1,201 @@
[
"Adorable",
"Adventurous",
"Aggressive",
"Agreeable",
"Alert",
"Alive",
"Amused",
"Angry",
"Annoyed",
"Annoying",
"Anxious",
"Arrogant",
"Ashamed",
"Attractive",
"Average",
"Beautiful",
"Better",
"Bewildered",
"Blue",
"Blushing",
"Bored",
"Brainy",
"Brave",
"Breakable",
"Bright",
"Busy",
"Calm",
"Careful",
"Cautious",
"Charming",
"Cheerful",
"Clean",
"Clear",
"Clever",
"Cloudy",
"Clumsy",
"Colorful",
"Comfortable",
"Concerned",
"Confused",
"Cooperative",
"Courageous",
"Crazy",
"Creepy",
"Crowded",
"Curious",
"Cute",
"Dangerous",
"Dark",
"Defiant",
"Delightful",
"Depressed",
"Determined",
"Different",
"Difficult",
"Disgusted",
"Distinct",
"Disturbed",
"Dizzy",
"Doubtful",
"Drab",
"Dull",
"Eager",
"Easy",
"Elated",
"Elegant",
"Embarrassed",
"Enchanting",
"Encouraging",
"Energetic",
"Enthusiastic",
"Envious",
"Evil",
"Excited",
"Expensive",
"Exuberant",
"Fair",
"Faithful",
"Famous",
"Fancy",
"Fantastic",
"Fierce",
"Fine",
"Foolish",
"Fragile",
"Frail",
"Frantic",
"Friendly",
"Frightened",
"Funny",
"Gentle",
"Gifted",
"Glamorous",
"Gleaming",
"Glorious",
"Good",
"Gorgeous",
"Graceful",
"Grumpy",
"Handsome",
"Happy",
"Healthy",
"Helpful",
"Hilarious",
"Homely",
"Hungry",
"Important",
"Impossible",
"Inexpensive",
"Innocent",
"Inquisitive",
"Itchy",
"Jealous",
"Jittery",
"Jolly",
"Joyous",
"Kind",
"Lazy",
"Light",
"Lively",
"Lonely",
"Long",
"Lovely",
"Lucky",
"Magnificent",
"Misty",
"Modern",
"Motionless",
"Muddy",
"Mushy",
"Mysterious",
"Naughty",
"Nervous",
"Nice",
"Nutty",
"Obedient",
"Obnoxious",
"Odd",
"Old-fashioned",
"Open",
"Outrageous",
"Outstanding",
"Panicky",
"Perfect",
"Plain",
"Pleasant",
"Poised",
"Powerful",
"Precious",
"Prickly",
"Proud",
"Puzzled",
"Quaint",
"Real",
"Relieved",
"Scary",
"Selfish",
"Shiny",
"Shy",
"Silly",
"Sleepy",
"Smiling",
"Smoggy",
"Sparkling",
"Splendid",
"Spotless",
"Stormy",
"Strange",
"Successful",
"Super",
"Talented",
"Tame",
"Tasty",
"Tender",
"Tense",
"Terrible",
"Thankful",
"Thoughtful",
"Thoughtless",
"Tired",
"Tough",
"Uninterested",
"Unsightly",
"Unusual",
"Upset",
"Uptight",
"Vast",
"Victorious",
"Vivacious",
"Wandering",
"Weary",
"Wicked",
"Wide-eyed",
"Wild",
"Witty",
"Worried",
"Worrisome",
"Zany",
"Zealous"
]

api/src/res/crabs.json (new file, 47 lines)

@@ -0,0 +1,47 @@
[
"American Horseshoe",
"Atlantic Ghost",
"Baja Elbow",
"Big Claw Purple Hermit",
"Coldwater Mole",
"Cuata Swim",
"Deepwater Frog",
"Dwarf Teardrop",
"Elegant Hermit",
"Flat Spider",
"Ghost",
"Globe Purse",
"Green",
"Halloween",
"Harbor Spider",
"Inflated Spider",
"Left Clawed Hermit",
"Lumpy Claw",
"Magnificent Hermit",
"Mexican Spider",
"Mouthless Land",
"Northern Lemon Rock",
"Pacific Arrow",
"Pacific Mole",
"Paco Box",
"Panamic Spider",
"Purple Shore",
"Red Rock",
"Red Swim",
"Red-leg Hermit",
"Robust Swim",
"Rough Swim",
"Sand Swim",
"Sally Lightfoot",
"Shamed-face Box",
"Shamed-face Heart Box",
"Shell",
"Small Arched Box",
"Southern Kelp",
"Spotted Box",
"Striated Mole",
"Striped Shore",
"Tropical Mole",
"Walking Rock",
"Yellow Shore"
]

api/src/routes/event.rs (new file, 140 lines)

@@ -0,0 +1,140 @@
use axum::{
extract::{self, Path},
http::StatusCode,
Json,
};
use common::{Adaptor, Event};
use rand::{seq::SliceRandom, thread_rng, Rng};
use regex::Regex;
use crate::{
errors::ApiError,
payloads::{ApiResult, EventInput, EventResponse},
State,
};
#[utoipa::path(
get,
path = "/event/{event_id}",
params(
("event_id", description = "The ID of the event"),
),
responses(
(status = 200, description = "Ok", body = EventResponse),
(status = 404, description = "Not found"),
(status = 429, description = "Too many requests"),
),
tag = "event",
)]
/// Get details about an event
pub async fn get_event<A: Adaptor>(
extract::State(state): State<A>,
Path(event_id): Path<String>,
) -> ApiResult<EventResponse, A> {
let adaptor = &state.lock().await.adaptor;
let event = adaptor
.get_event(event_id)
.await
.map_err(ApiError::AdaptorError)?;
match event {
Some(event) => Ok(Json(event.into())),
None => Err(ApiError::NotFound),
}
}
#[utoipa::path(
post,
path = "/event",
request_body(content = EventInput, description = "New event details"),
responses(
(status = 201, description = "Created", body = EventResponse),
(status = 415, description = "Unsupported input format"),
(status = 422, description = "Invalid input provided"),
(status = 429, description = "Too many requests"),
),
tag = "event",
)]
/// Create a new event
pub async fn create_event<A: Adaptor>(
extract::State(state): State<A>,
Json(input): Json<EventInput>,
) -> Result<(StatusCode, Json<EventResponse>), ApiError<A>> {
let adaptor = &state.lock().await.adaptor;
// Get the current timestamp
let now = chrono::offset::Utc::now();
// Generate a name if none provided
let name = match input.name {
Some(x) if !x.is_empty() => x.trim().to_string(),
_ => generate_name(),
};
// Generate an ID
let mut id = generate_id(&name);
// Check the ID doesn't already exist
while (adaptor
.get_event(id.clone())
.await
.map_err(ApiError::AdaptorError)?)
.is_some()
{
id = generate_id(&name);
}
let event = adaptor
.create_event(Event {
id,
name,
created_at: now,
visited_at: now,
times: input.times,
timezone: input.timezone,
})
.await
.map_err(ApiError::AdaptorError)?;
// Update stats
adaptor
.increment_stat_event_count()
.await
.map_err(ApiError::AdaptorError)?;
Ok((StatusCode::CREATED, Json(event.into())))
}
// Generate a random name based on an adjective and a crab species
fn generate_name() -> String {
let adjectives: Vec<String> =
serde_json::from_slice(include_bytes!("../res/adjectives.json")).unwrap();
let crabs: Vec<String> = serde_json::from_slice(include_bytes!("../res/crabs.json")).unwrap();
format!(
"{} {} Crab",
adjectives.choose(&mut thread_rng()).unwrap(),
crabs.choose(&mut thread_rng()).unwrap()
)
}
// Generate a slug for the crab fit
fn generate_id(name: &str) -> String {
let mut id = encode_name(name.to_string());
if id.replace('-', "").is_empty() {
id = encode_name(generate_name());
}
let number = thread_rng().gen_range(100000..=999999);
format!("{}-{}", id, number)
}
// Use punycode to encode the name
fn encode_name(name: String) -> String {
let pc = punycode::encode(&name.trim().to_lowercase())
.unwrap_or(String::from(""))
.trim()
.replace(|c: char| !c.is_ascii_alphanumeric() && c != ' ', "");
let re = Regex::new(r"\s+").unwrap();
re.replace_all(&pc, "-").to_string()
}

api/src/routes/mod.rs (new file, 4 lines)

@@ -0,0 +1,4 @@
pub mod event;
pub mod person;
pub mod stats;
pub mod tasks;

api/src/routes/person.rs (new file, 216 lines)

@@ -0,0 +1,216 @@
use axum::{
extract::{self, Path},
headers::{authorization::Bearer, Authorization},
Json, TypedHeader,
};
use base64::{engine::general_purpose, Engine};
use common::{Adaptor, Person};
use crate::{
errors::ApiError,
payloads::{ApiResult, PersonInput, PersonResponse},
State,
};
#[utoipa::path(
get,
path = "/event/{event_id}/people",
params(
("event_id", description = "The ID of the event"),
),
responses(
(status = 200, description = "Ok", body = [PersonResponse]),
(status = 404, description = "Event not found"),
(status = 429, description = "Too many requests"),
),
tag = "person",
)]
/// Get availabilities for an event
pub async fn get_people<A: Adaptor>(
extract::State(state): State<A>,
Path(event_id): Path<String>,
) -> ApiResult<Vec<PersonResponse>, A> {
let adaptor = &state.lock().await.adaptor;
let people = adaptor
.get_people(event_id)
.await
.map_err(ApiError::AdaptorError)?;
match people {
Some(people) => Ok(Json(people.into_iter().map(|p| p.into()).collect())),
None => Err(ApiError::NotFound),
}
}
#[utoipa::path(
get,
path = "/event/{event_id}/people/{person_name}",
params(
("event_id", description = "The ID of the event"),
("person_name", description = "The name of the person"),
),
security((), ("password" = [])),
responses(
(status = 200, description = "Ok", body = PersonResponse),
(status = 401, description = "Incorrect password"),
(status = 404, description = "Event not found"),
(status = 415, description = "Unsupported input format"),
(status = 422, description = "Invalid input provided"),
(status = 429, description = "Too many requests"),
),
tag = "person",
)]
/// Login or create a person for an event
pub async fn get_person<A: Adaptor>(
extract::State(state): State<A>,
Path((event_id, person_name)): Path<(String, String)>,
bearer: Option<TypedHeader<Authorization<Bearer>>>,
) -> ApiResult<PersonResponse, A> {
let adaptor = &state.lock().await.adaptor;
// Get inputted password
let password = parse_password(bearer);
let existing_people = adaptor
.get_people(event_id.clone())
.await
.map_err(ApiError::AdaptorError)?;
// Event not found
if existing_people.is_none() {
return Err(ApiError::NotFound);
}
// Check if the user already exists
let existing_person = existing_people
.unwrap()
.into_iter()
.find(|p| p.name.to_lowercase() == person_name.to_lowercase());
match existing_person {
// Login
Some(p) => {
// Verify password (if set)
if verify_password(&p, password) {
Ok(Json(p.into()))
} else {
Err(ApiError::NotAuthorized)
}
}
// Signup
None => {
// Update stats
adaptor
.increment_stat_person_count()
.await
.map_err(ApiError::AdaptorError)?;
Ok(Json(
adaptor
.upsert_person(
event_id,
Person {
name: person_name,
password_hash: password
.map(|raw| bcrypt::hash(raw, 10).unwrap_or(String::from(""))),
created_at: chrono::offset::Utc::now(),
availability: vec![],
},
)
.await
.map_err(ApiError::AdaptorError)?
.unwrap()
.into(),
))
}
}
}
#[utoipa::path(
patch,
path = "/event/{event_id}/people/{person_name}",
params(
("event_id", description = "The ID of the event"),
("person_name", description = "The name of the person"),
),
security((), ("password" = [])),
request_body(content = PersonInput, description = "Person details"),
responses(
(status = 200, description = "Ok", body = PersonResponse),
(status = 401, description = "Incorrect password"),
(status = 404, description = "Event or person not found"),
(status = 415, description = "Unsupported input format"),
(status = 422, description = "Invalid input provided"),
(status = 429, description = "Too many requests"),
),
tag = "person",
)]
/// Update a person's availabilities
pub async fn update_person<A: Adaptor>(
extract::State(state): State<A>,
Path((event_id, person_name)): Path<(String, String)>,
bearer: Option<TypedHeader<Authorization<Bearer>>>,
Json(input): Json<PersonInput>,
) -> ApiResult<PersonResponse, A> {
let adaptor = &state.lock().await.adaptor;
let existing_people = adaptor
.get_people(event_id.clone())
.await
.map_err(ApiError::AdaptorError)?;
// Event not found
if existing_people.is_none() {
return Err(ApiError::NotFound);
}
// Check if the user exists
let existing_person = existing_people
.unwrap()
.into_iter()
.find(|p| p.name.to_lowercase() == person_name.to_lowercase())
.ok_or(ApiError::NotFound)?;
// Verify password (if set)
if !verify_password(&existing_person, parse_password(bearer)) {
return Err(ApiError::NotAuthorized);
}
Ok(Json(
adaptor
.upsert_person(
event_id,
Person {
name: existing_person.name,
password_hash: existing_person.password_hash,
created_at: existing_person.created_at,
availability: input.availability,
},
)
.await
.map_err(ApiError::AdaptorError)?
.unwrap()
.into(),
))
}
pub fn parse_password(bearer: Option<TypedHeader<Authorization<Bearer>>>) -> Option<String> {
bearer.map(|TypedHeader(Authorization(b))| {
String::from_utf8(
general_purpose::STANDARD
.decode(b.token().trim())
.unwrap_or(vec![]),
)
.unwrap_or("".to_owned())
})
}
pub fn verify_password(person: &Person, raw: Option<String>) -> bool {
match &person.password_hash {
Some(hash) => bcrypt::verify(raw.unwrap_or("".to_owned()), hash).unwrap_or(false),
// Specifically allow a user who doesn't have a password
// set to log in with or without any password input
None => true,
}
}
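// A minimal sketch of the semantics of the two helpers above, as an
// illustrative test (field values are placeholders):
#[cfg(test)]
mod password_sketch {
    use super::*;

    #[test]
    fn verify_password_semantics() {
        let mut person = Person {
            name: "Test".to_owned(),
            password_hash: None,
            created_at: chrono::offset::Utc::now(),
            availability: vec![],
        };
        // No password hash set: any input (or none) is accepted
        assert!(verify_password(&person, None));
        assert!(verify_password(&person, Some("anything".to_owned())));
        // Password hash set: only the matching raw input verifies
        person.password_hash = Some(bcrypt::hash("hunter2", 10).unwrap());
        assert!(verify_password(&person, Some("hunter2".to_owned())));
        assert!(!verify_password(&person, Some("wrong".to_owned())));
    }
}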
26
api/src/routes/stats.rs Normal file
View file

@ -0,0 +1,26 @@
use axum::{extract, Json};
use common::Adaptor;
use crate::{
errors::ApiError,
payloads::{ApiResult, StatsResponse},
State,
};
#[utoipa::path(
get,
path = "/stats",
responses(
(status = 200, description = "Ok", body = StatsResponse),
(status = 429, description = "Too many requests"),
),
tag = "info",
)]
/// Get current stats
pub async fn get_stats<A: Adaptor>(extract::State(state): State<A>) -> ApiResult<StatsResponse, A> {
let adaptor = &state.lock().await.adaptor;
let stats = adaptor.get_stats().await.map_err(ApiError::AdaptorError)?;
Ok(Json(stats.into()))
}
51
api/src/routes/tasks.rs Normal file
View file

@ -0,0 +1,51 @@
use std::env;
use axum::{extract, http::HeaderMap};
use chrono::{Duration, Utc};
use common::Adaptor;
use tracing::info;
use crate::{errors::ApiError, State};
#[utoipa::path(
get,
path = "/tasks/cleanup",
responses(
(status = 200, description = "Cleanup complete"),
(status = 401, description = "Missing or incorrect X-Cron-Key header"),
(status = 429, description = "Too many requests"),
),
security((), ("cron-key" = [])),
tag = "tasks",
)]
/// Delete events older than 3 months
pub async fn cleanup<A: Adaptor>(
extract::State(state): State<A>,
headers: HeaderMap,
) -> Result<(), ApiError<A>> {
// Check cron key
let cron_key_header: String = headers
.get("X-Cron-Key")
.map(|k| k.to_str().unwrap_or_default().into())
.unwrap_or_default();
let env_key = env::var("CRON_KEY").unwrap_or_default();
if !env_key.is_empty() && cron_key_header != env_key {
return Err(ApiError::NotAuthorized);
}
info!("Running cleanup task");
let adaptor = &state.lock().await.adaptor;
let result = adaptor
.delete_events(Utc::now() - Duration::days(90))
.await
.map_err(ApiError::AdaptorError)?;
info!(
"Cleanup successful: {} events and {} people removed",
result.event_count, result.person_count
);
Ok(())
}
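// A minimal sketch of how a scheduler could invoke this task; the host is a
// placeholder, and the key must match the server's CRON_KEY env var:
//
//   GET https://<api-host>/tasks/cleanup
//   X-Cron-Key: <CRON_KEY>
//
// Note that when CRON_KEY is unset or empty, the check above is skipped and
// the endpoint is effectively unauthenticated.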
1
browser-extension/.gitignore vendored Normal file
View file

@ -0,0 +1 @@
*.zip
(4 binary image files; sizes unchanged: 3.3 KiB, 416 B, 841 B, 1.3 KiB)
View file

@ -1,50 +0,0 @@
module.exports = {
'env': {
'es2021': true,
'node': true
},
'extends': 'eslint:recommended',
'overrides': [
],
'parserOptions': {
'ecmaVersion': 'latest',
'sourceType': 'module'
},
'rules': {
'indent': [
'error',
2
],
'linebreak-style': [
'error',
'unix'
],
'quotes': [
'error',
'single'
],
'semi': [
'error',
'never'
],
'eqeqeq': 2,
'no-return-await': 1,
'no-var': 2,
'prefer-const': 1,
'yoda': 2,
'no-trailing-spaces': 1,
'eol-last': [1, 'always'],
'no-unused-vars': [
1,
{
'args': 'all',
'argsIgnorePattern': '^_',
'ignoreRestSiblings': true
},
],
'arrow-parens': [
'error',
'as-needed'
],
}
}
View file

@ -1,12 +0,0 @@
.gcloudignore
.git
.gitignore
.env
node_modules/
.parcel-cache
res
routes
swagger.yaml
View file

@ -1,8 +0,0 @@
node_modules
dist
.parcel-cache
.env
npm-debug.log*
yarn-debug.log*
yarn-error.log*
View file

@ -1,7 +0,0 @@
runtime: nodejs16
service: api
handlers:
- url: /.*
secure: always
redirect_http_response_code: 301
script: auto
View file

@ -1,9 +0,0 @@
cron:
- description: "clean up old events"
url: /tasks/cleanup
schedule: every monday 09:00
target: api
- description: "remove people with an event id that no longer exists"
url: /tasks/removeOrphans
schedule: 1st wednesday of month 09:00
target: api
View file

@ -1,61 +0,0 @@
import { config } from 'dotenv'
import { Datastore } from '@google-cloud/datastore'
import express from 'express'
import cors from 'cors'
import packageJson from './package.json'
import {
stats,
getEvent,
createEvent,
getPeople,
createPerson,
login,
updatePerson,
taskCleanup,
taskRemoveOrphans,
} from './routes'
config()
const app = express()
const port = 8080
const corsOptions = {
origin: process.env.NODE_ENV === 'production' ? 'https://crab.fit' : 'http://localhost:5173',
}
const datastore = new Datastore({
keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,
})
app.use(express.json())
app.use((req, _res, next) => {
req.datastore = datastore
req.types = {
event: process.env.NODE_ENV === 'production' ? 'Event' : 'DevEvent',
person: process.env.NODE_ENV === 'production' ? 'Person' : 'DevPerson',
stats: process.env.NODE_ENV === 'production' ? 'Stats' : 'DevStats',
}
next()
})
app.options('*', cors(corsOptions))
app.use(cors(corsOptions))
app.get('/', (_req, res) => res.send(`Crabfit API v${packageJson.version}`))
app.get('/stats', stats)
app.get('/event/:eventId', getEvent)
app.post('/event', createEvent)
app.get('/event/:eventId/people', getPeople)
app.post('/event/:eventId/people', createPerson)
app.post('/event/:eventId/people/:personName', login)
app.patch('/event/:eventId/people/:personName', updatePerson)
// Tasks
app.get('/tasks/cleanup', taskCleanup)
app.get('/tasks/removeOrphans', taskRemoveOrphans)
app.listen(port, () => {
console.log(`Crabfit API listening at http://localhost:${port} in ${process.env.NODE_ENV === 'production' ? 'prod' : 'dev'} mode`)
})
View file

@ -1,34 +0,0 @@
{
"name": "crabfit-backend",
"version": "2.0.0",
"description": "API for Crabfit",
"author": "Ben Grant",
"license": "GPL-3.0-only",
"private": true,
"source": "index.js",
"main": "dist/index.js",
"engines": {
"node": ">=12.0.0"
},
"scripts": {
"build:dev": "NODE_ENV=development parcel build --no-cache",
"dev": "rm -rf .parcel-cache dist && NODE_ENV=development nodemon --exec \"yarn build:dev && yarn start\" --watch routes --watch res --watch index.js",
"build": "parcel build",
"start": "node ./dist/index.js",
"lint": "eslint index.js ./routes"
},
"dependencies": {
"@google-cloud/datastore": "^7.0.0",
"bcrypt": "^5.0.1",
"cors": "^2.8.5",
"dayjs": "^1.11.5",
"dotenv": "^16.0.1",
"express": "^4.18.1",
"punycode": "^2.1.1"
},
"devDependencies": {
"eslint": "^8.22.0",
"nodemon": "^2.0.19",
"parcel": "^2.7.0"
}
}
View file

@ -1,201 +0,0 @@
[
"adorable",
"adventurous",
"aggressive",
"agreeable",
"alert",
"alive",
"amused",
"angry",
"annoyed",
"annoying",
"anxious",
"arrogant",
"ashamed",
"attractive",
"average",
"beautiful",
"better",
"bewildered",
"blue",
"blushing",
"bored",
"brainy",
"brave",
"breakable",
"bright",
"busy",
"calm",
"careful",
"cautious",
"charming",
"cheerful",
"clean",
"clear",
"clever",
"cloudy",
"clumsy",
"colorful",
"comfortable",
"concerned",
"confused",
"cooperative",
"courageous",
"crazy",
"creepy",
"crowded",
"curious",
"cute",
"dangerous",
"dark",
"defiant",
"delightful",
"depressed",
"determined",
"different",
"difficult",
"disgusted",
"distinct",
"disturbed",
"dizzy",
"doubtful",
"drab",
"dull",
"eager",
"easy",
"elated",
"elegant",
"embarrassed",
"enchanting",
"encouraging",
"energetic",
"enthusiastic",
"envious",
"evil",
"excited",
"expensive",
"exuberant",
"fair",
"faithful",
"famous",
"fancy",
"fantastic",
"fierce",
"fine",
"foolish",
"fragile",
"frail",
"frantic",
"friendly",
"frightened",
"funny",
"gentle",
"gifted",
"glamorous",
"gleaming",
"glorious",
"good",
"gorgeous",
"graceful",
"grumpy",
"handsome",
"happy",
"healthy",
"helpful",
"hilarious",
"homely",
"hungry",
"important",
"impossible",
"inexpensive",
"innocent",
"inquisitive",
"itchy",
"jealous",
"jittery",
"jolly",
"joyous",
"kind",
"lazy",
"light",
"lively",
"lonely",
"long",
"lovely",
"lucky",
"magnificent",
"misty",
"modern",
"motionless",
"muddy",
"mushy",
"mysterious",
"naughty",
"nervous",
"nice",
"nutty",
"obedient",
"obnoxious",
"odd",
"old-fashioned",
"open",
"outrageous",
"outstanding",
"panicky",
"perfect",
"plain",
"pleasant",
"poised",
"powerful",
"precious",
"prickly",
"proud",
"puzzled",
"quaint",
"real",
"relieved",
"scary",
"selfish",
"shiny",
"shy",
"silly",
"sleepy",
"smiling",
"smoggy",
"sparkling",
"splendid",
"spotless",
"stormy",
"strange",
"successful",
"super",
"talented",
"tame",
"tasty",
"tender",
"tense",
"terrible",
"thankful",
"thoughtful",
"thoughtless",
"tired",
"tough",
"uninterested",
"unsightly",
"unusual",
"upset",
"uptight",
"vast",
"victorious",
"vivacious",
"wandering",
"weary",
"wicked",
"wide-eyed",
"wild",
"witty",
"worried",
"worrisome",
"zany",
"zealous"
]
View file

@ -1,47 +0,0 @@
[
"American Horseshoe",
"Atlantic Ghost",
"Baja Elbow",
"Big Claw Purple Hermit",
"Coldwater Mole",
"Cuata Swim",
"Deepwater Frog",
"Dwarf Teardrop",
"Elegant Hermit",
"Flat Spider",
"Ghost",
"Globe Purse",
"Green",
"Halloween",
"Harbor Spider",
"Inflated Spider",
"Left Clawed Hermit",
"Lumpy Claw",
"Magnificent Hermit",
"Mexican Spider",
"Mouthless Land",
"Northern Lemon Rock",
"Pacific Arrow",
"Pacific Mole",
"Paco Box",
"Panamic Spider",
"Purple Shore",
"Red Rock",
"Red Swim",
"Red-leg Hermit",
"Robust Swim",
"Rough Swim",
"Sand Swim",
"Sally Lightfoot",
"Shamed-face Box",
"Shamed-face Heart Box",
"Shell",
"Small Arched Box",
"Southern Kelp",
"Spotted Box",
"Striated Mole",
"Striped Shore",
"Tropical Mole",
"Walking Rock",
"Yellow Shore"
]
View file

@ -1,84 +0,0 @@
import dayjs from 'dayjs'
import punycode from 'punycode/'
import adjectives from '../res/adjectives.json'
import crabs from '../res/crabs.json'
const capitalize = string => string.charAt(0).toUpperCase() + string.slice(1)
// Generate a random name based on an adjective and a crab species
const generateName = () =>
`${capitalize(adjectives[Math.floor(Math.random() * adjectives.length)])} ${crabs[Math.floor(Math.random() * crabs.length)]} Crab`
// Generate a slug for the crab fit
const generateId = name => {
let id = punycode.encode(name.trim().toLowerCase()).trim().replace(/[^A-Za-z0-9 ]/g, '').replace(/\s+/g, '-')
if (id.replace(/-/g, '') === '') {
id = generateName().trim().toLowerCase().replace(/[^A-Za-z0-9 ]/g, '').replace(/\s+/g, '-')
}
const number = Math.floor(100000 + Math.random() * 900000)
return `${id}-${number}`
}
const createEvent = async (req, res) => {
const { event } = req.body
try {
const name = event.name.trim() === '' ? generateName() : event.name.trim()
let eventId = generateId(name)
const currentTime = dayjs().unix()
// Check if the event ID already exists, and if so generate a new one
let eventResult
do {
const query = req.datastore.createQuery(req.types.event)
.select('__key__')
.filter('__key__', req.datastore.key([req.types.event, eventId]))
eventResult = (await req.datastore.runQuery(query))[0][0]
if (eventResult !== undefined) {
eventId = generateId(name)
}
} while (eventResult !== undefined)
const entity = {
key: req.datastore.key([req.types.event, eventId]),
data: {
name: name,
created: currentTime,
times: event.times,
timezone: event.timezone,
},
}
await req.datastore.insert(entity)
res.status(201).send({
id: eventId,
name: name,
created: currentTime,
times: event.times,
timezone: event.timezone,
})
// Update stats
const eventCountResult = (await req.datastore.get(req.datastore.key([req.types.stats, 'eventCount'])))[0] || null
if (eventCountResult) {
await req.datastore.upsert({
...eventCountResult,
value: eventCountResult.value + 1,
})
} else {
await req.datastore.insert({
key: req.datastore.key([req.types.stats, 'eventCount']),
data: { value: 1 },
})
}
} catch (e) {
console.error(e)
res.status(400).send({ error: 'An error occurred while creating the event' })
}
}
export default createEvent
View file

@ -1,65 +0,0 @@
import dayjs from 'dayjs'
import bcrypt from 'bcrypt'
const createPerson = async (req, res) => {
const { eventId } = req.params
const { person } = req.body
try {
const event = (await req.datastore.get(req.datastore.key([req.types.event, eventId])))[0]
const query = req.datastore.createQuery(req.types.person)
.filter('eventId', eventId)
.filter('name', person.name)
const personResult = (await req.datastore.runQuery(query))[0][0]
if (event) {
if (person && personResult === undefined) {
const currentTime = dayjs().unix()
// If password
let hash = null
if (person.password) {
hash = await bcrypt.hash(person.password, 10)
}
const entity = {
key: req.datastore.key(req.types.person),
data: {
name: person.name.trim(),
password: hash,
eventId: eventId,
created: currentTime,
availability: [],
},
}
await req.datastore.insert(entity)
res.status(201).send({ success: 'Created' })
// Update stats
const personCountResult = (await req.datastore.get(req.datastore.key([req.types.stats, 'personCount'])))[0] || null
if (personCountResult) {
await req.datastore.upsert({
...personCountResult,
value: personCountResult.value + 1,
})
} else {
await req.datastore.insert({
key: req.datastore.key([req.types.stats, 'personCount']),
data: { value: 1 },
})
}
} else {
res.status(400).send({ error: 'Unable to create person' })
}
} else {
res.status(404).send({ error: 'Event does not exist' })
}
} catch (e) {
console.error(e)
res.status(400).send({ error: 'An error occurred while creating the person' })
}
}
export default createPerson
View file

@ -1,29 +0,0 @@
import dayjs from 'dayjs'
const getEvent = async (req, res) => {
const { eventId } = req.params
try {
const event = (await req.datastore.get(req.datastore.key([req.types.event, eventId])))[0]
if (event) {
res.send({
id: eventId,
...event,
})
// Update last visited time
await req.datastore.upsert({
...event,
visited: dayjs().unix()
})
} else {
res.status(404).send({ error: 'Event not found' })
}
} catch (e) {
console.error(e)
res.status(404).send({ error: 'Event not found' })
}
}
export default getEvent
View file

@ -1,20 +0,0 @@
const getPeople = async (req, res) => {
const { eventId } = req.params
try {
const query = req.datastore.createQuery(req.types.person).filter('eventId', eventId)
let people = (await req.datastore.runQuery(query))[0]
people = people.map(person => ({
name: person.name,
availability: person.availability,
created: person.created,
}))
res.send({ people })
} catch (e) {
console.error(e)
res.status(404).send({ error: 'Person not found' })
}
}
export default getPeople
View file

@ -1,10 +0,0 @@
export { default as stats } from './stats'
export { default as getEvent } from './getEvent'
export { default as createEvent } from './createEvent'
export { default as getPeople } from './getPeople'
export { default as createPerson } from './createPerson'
export { default as login } from './login'
export { default as updatePerson } from './updatePerson'
export { default as taskCleanup } from './taskCleanup'
export { default as taskRemoveOrphans } from './taskRemoveOrphans'
View file

@ -1,35 +0,0 @@
import bcrypt from 'bcrypt'
const login = async (req, res) => {
const { eventId, personName } = req.params
const { person } = req.body
try {
const query = req.datastore.createQuery(req.types.person)
.filter('eventId', eventId)
.filter('name', personName)
const personResult = (await req.datastore.runQuery(query))[0][0]
if (personResult) {
if (personResult.password) {
const passwordsMatch = person && person.password && await bcrypt.compare(person.password, personResult.password)
if (!passwordsMatch) {
return res.status(401).send({ error: 'Incorrect password' })
}
}
res.send({
name: personName,
availability: personResult.availability,
created: personResult.created,
})
} else {
res.status(404).send({ error: 'Person does not exist' })
}
} catch (e) {
console.error(e)
res.status(400).send({ error: 'An error occurred' })
}
}
export default login
View file

@ -1,29 +0,0 @@
import packageJson from '../package.json'
const stats = async (req, res) => {
let eventCount = null
let personCount = null
try {
const eventResult = (await req.datastore.get(req.datastore.key([req.types.stats, 'eventCount'])))[0] || null
const personResult = (await req.datastore.get(req.datastore.key([req.types.stats, 'personCount'])))[0] || null
if (eventResult) {
eventCount = eventResult.value
}
if (personResult) {
personCount = personResult.value
}
} catch (e) {
console.error(e)
}
res.send({
eventCount,
personCount,
version: packageJson.version,
})
}
export default stats
View file

@ -1,48 +0,0 @@
import dayjs from 'dayjs'
const taskCleanup = async (req, res) => {
if (req.header('X-Appengine-Cron') === undefined) {
return res.status(400).send({ error: 'This task can only be run from a cron job' })
}
const threeMonthsAgo = dayjs().subtract(3, 'month').unix()
console.log(`Running cleanup task at ${dayjs().format('h:mma D MMM YYYY')}`)
try {
// Fetch events that haven't been visited in over 3 months
const eventQuery = req.datastore.createQuery(req.types.event).filter('visited', '<', threeMonthsAgo)
const oldEvents = (await req.datastore.runQuery(eventQuery))[0]
if (oldEvents && oldEvents.length > 0) {
const oldEventIds = oldEvents.map(e => e[req.datastore.KEY].name)
console.log(`Found ${oldEventIds.length} events to remove`)
// Fetch availabilities linked to the events discovered
let peopleDiscovered = 0
await Promise.all(oldEventIds.map(async eventId => {
const peopleQuery = req.datastore.createQuery(req.types.person).filter('eventId', eventId)
const oldPeople = (await req.datastore.runQuery(peopleQuery))[0]
if (oldPeople && oldPeople.length > 0) {
peopleDiscovered += oldPeople.length
await req.datastore.delete(oldPeople.map(person => person[req.datastore.KEY]))
}
}))
await req.datastore.delete(oldEvents.map(event => event[req.datastore.KEY]))
console.log(`Cleanup successful: ${oldEventIds.length} events and ${peopleDiscovered} people removed`)
res.sendStatus(200)
} else {
console.log('Found 0 events to remove, ending cleanup')
res.sendStatus(404)
}
} catch (e) {
console.error(e)
res.sendStatus(404)
}
}
export default taskCleanup
View file

@ -1,48 +0,0 @@
import dayjs from 'dayjs'
const taskRemoveOrphans = async (req, res) => {
if (req.header('X-Appengine-Cron') === undefined) {
return res.status(400).send({ error: 'This task can only be run from a cron job' })
}
const threeMonthsAgo = dayjs().subtract(3, 'month').unix()
console.log(`Running orphan removal task at ${dayjs().format('h:mma D MMM YYYY')}`)
try {
// Fetch people that are older than 3 months
const peopleQuery = req.datastore.createQuery(req.types.person).filter('created', '<', threeMonthsAgo)
const oldPeople = (await req.datastore.runQuery(peopleQuery))[0]
if (oldPeople && oldPeople.length > 0) {
console.log(`Found ${oldPeople.length} people older than 3 months, checking for events`)
// Fetch events linked to the people discovered
let peopleWithoutEvents = 0
await Promise.all(oldPeople.map(async person => {
const event = (await req.datastore.get(req.datastore.key([req.types.event, person.eventId])))[0]
if (!event) {
peopleWithoutEvents++
await req.datastore.delete(person[req.datastore.KEY])
}
}))
if (peopleWithoutEvents > 0) {
console.log(`Orphan removal successful: ${peopleWithoutEvents} people removed`)
res.sendStatus(200)
} else {
console.log('Found 0 people without events, ending orphan removal')
res.sendStatus(404)
}
} else {
console.log('Found 0 people older than 3 months, ending orphan removal')
res.sendStatus(404)
}
} catch (e) {
console.error(e)
res.sendStatus(404)
}
}
export default taskRemoveOrphans
View file

@ -1,40 +0,0 @@
import bcrypt from 'bcrypt'
const updatePerson = async (req, res) => {
const { eventId, personName } = req.params
const { person } = req.body
try {
const query = req.datastore.createQuery(req.types.person)
.filter('eventId', eventId)
.filter('name', personName)
const personResult = (await req.datastore.runQuery(query))[0][0]
if (personResult) {
if (person && person.availability) {
if (personResult.password) {
const passwordsMatch = person.password && await bcrypt.compare(person.password, personResult.password)
if (!passwordsMatch) {
return res.status(401).send({ error: 'Incorrect password' })
}
}
await req.datastore.upsert({
...personResult,
availability: person.availability,
})
res.status(200).send({ success: 'Updated' })
} else {
res.status(400).send({ error: 'Availability must be set' })
}
} else {
res.status(404).send({ error: 'Person not found' })
}
} catch (e) {
console.error(e)
res.status(400).send('An error occurred')
}
}
export default updatePerson
View file

@ -1,245 +0,0 @@
swagger: "2.0"
info:
title: "Crab Fit"
description: "Compare and align schedules to find a time that works for everyone"
version: "1.0.0"
host: "api-dot-crabfit.appspot.com"
x-google-endpoints:
- name: "api-dot-crabfit.appspot.com"
allowCors: true
schemes:
- "https"
produces:
- "application/json"
definitions:
Event:
type: "object"
properties:
id:
type: "string"
name:
type: "string"
timezone:
type: "string"
created:
type: "integer"
times:
type: "array"
items:
type: "string"
Person:
type: "object"
properties:
name:
type: "string"
availability:
type: "array"
items:
type: "string"
created:
type: "integer"
paths:
"/stats":
get:
summary: "Return stats for crabfit"
operationId: "getStats"
responses:
200:
description: "OK"
schema:
type: "object"
properties:
eventCount:
type: "integer"
personCount:
type: "integer"
version:
type: "string"
"/event/{eventId}":
get:
summary: "Return an event details"
operationId: "getEvent"
parameters:
- in: "path"
name: "eventId"
required: true
type: "string"
description: "The ID of the event"
responses:
200:
description: "OK"
schema:
$ref: '#/definitions/Event'
404:
description: "Not found"
"/event":
post:
summary: "Create a new event"
operationId: "postEvent"
parameters:
- in: "body"
name: "event"
required: true
schema:
type: "object"
properties:
name:
type: "string"
timezone:
type: "string"
times:
type: "array"
items:
type: "string"
description: "New event details"
responses:
201:
description: "Created"
schema:
$ref: '#/definitions/Event'
400:
description: "Invalid data"
"/event/{eventId}/people":
get:
summary: "Get availabilities for an event"
operationId: "getPeople"
parameters:
- in: "path"
name: "eventId"
required: true
type: "string"
description: "The ID of the event"
responses:
200:
description: "OK"
schema:
type: "object"
properties:
people:
type: "array"
items:
$ref: "#/definitions/Person"
404:
description: "Not found"
post:
summary: "Add a new person to the event"
operationId: "postPeople"
parameters:
- in: "path"
name: "eventId"
required: true
type: "string"
description: "The ID of the event"
- in: "body"
name: "person"
required: true
schema:
type: "object"
properties:
name:
type: "string"
password:
type: "string"
description: "New person details"
responses:
201:
description: "Created"
404:
description: "Not found"
400:
description: "Invalid data"
"/event/{eventId}/people/{personName}":
post:
summary: "Login as this person"
operationId: "getPerson"
parameters:
- in: "path"
name: "eventId"
required: true
type: "string"
description: "The ID of the event"
- in: "path"
name: "personName"
required: true
type: "string"
description: "The name of the person"
- in: "body"
name: "person"
required: false
schema:
type: "object"
properties:
password:
type: "string"
description: "Login details"
responses:
200:
description: "OK"
schema:
$ref: "#/definitions/Person"
401:
description: "Incorrect password"
404:
description: "Not found"
patch:
summary: "Update this person's availabilities"
operationId: "patchPerson"
parameters:
- in: "path"
name: "eventId"
required: true
type: "string"
description: "The ID of the event"
- in: "path"
name: "personName"
required: true
type: "string"
description: "The name of the person"
- in: "body"
name: "person"
required: true
schema:
type: "object"
properties:
password:
type: "string"
availability:
type: "array"
items:
type: "string"
description: "Updated person details"
responses:
200:
description: "OK"
401:
description: "Incorrect password"
404:
description: "Not found"
400:
description: "Invalid data"
"/tasks/cleanup":
get:
summary: "Delete events inactive for more than 3 months"
operationId: "taskCleanup"
tags:
- tasks
responses:
200:
description: "OK"
404:
description: "Not found"
400:
description: "Not called from a cron job"
"/tasks/removeOrphans":
get:
summary: "Deletes people if the event they were created under no longer exists"
operationId: "taskRemoveOrphans"
tags:
- tasks
responses:
200:
description: "OK"
404:
description: "Not found"
400:
description: "Not called from a cron job"
File diff suppressed because it is too large

(binary image; 104 KiB before and after)

Some files were not shown because too many files have changed in this diff