diff --git a/.ameba.yml b/.ameba.yml
new file mode 100644
index 00000000000..0569b1c3886
--- /dev/null
+++ b/.ameba.yml
@@ -0,0 +1,63 @@
+# This configuration file was generated by `ameba --gen-config`
+# on 2019-10-15 23:45:35 UTC using Ameba version 0.10.1.
+# The point is for the user to remove these configuration records
+# one by one as the reported problems are removed from the code base.
+
+# Run `ameba --only Metrics/CyclomaticComplexity` for details
+Metrics/CyclomaticComplexity:
+ Description: Disallows methods with a cyclomatic complexity higher than `MaxComplexity`
+ MaxComplexity: 12
+ Enabled: true
+ Severity: Convention
+
+# Run `ameba --only Lint/ShadowingOuterLocalVar` for details
+Lint/ShadowingOuterLocalVar:
+ Description: Disallows the usage of the same name as outer local variables for block
+ or proc arguments.
+ Enabled: true
+ Severity: Warning
+
+# Run `ameba --only Style/ConstantNames` for details
+Style/ConstantNames:
+ Description: Enforces constant names to be in screaming case
+ Enabled: false
+ Severity: Convention
+
+# Run `ameba --only Style/UnlessElse` for details
+Style/UnlessElse:
+ Description: Disallows the use of an `else` block with the `unless`
+ Enabled: true
+ Severity: Convention
+
+# Run `ameba --only Lint/UnusedArgument` for details
+Lint/UnusedArgument:
+ Description: Disallows unused arguments
+ IgnoreDefs: true
+ IgnoreBlocks: false
+ IgnoreProcs: false
+ Enabled: true
+ Severity: Warning
+
+# Run `ameba --only Lint/UselessAssign` for details
+Lint/UselessAssign:
+ Description: Disallows useless variable assignments
+ Enabled: true
+ Severity: Warning
+ Excluded:
+ - repositories/private_drivers/lib/redis/spec/redis_spec.cr
+ - repositories/private_drivers/lib/pool/test/pool_test.cr
+ - repositories/private_drivers/lib/pool/test/connection_pool_test.cr
+
+# Run `ameba --only Lint/UnreachableCode` for details
+Lint/UnreachableCode:
+ Description: Reports unreachable code
+ Enabled: true
+ Severity: Warning
+ Excluded:
+ - repositories/private_drivers/lib/driver/src/driver/protocol/management.cr
+
+# Run `ameba --only Style/VariableNames` for details
+Style/VariableNames:
+ Description: Enforces variable names to be in underscored case
+ Enabled: true
+ Severity: Convention
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
new file mode 100644
index 00000000000..abad59c9f43
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,33 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: 'Bug: A concise description of the behaviour'
+labels: bug
+assignees: ''
+
+---
+
+**Describe the bug**
+
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+
+Steps to reproduce the behaviour or a minimal code snippet that demonstrates the behaviour.
+
+**Expected behaviour**
+
+A clear and concise description of what you expected to happen.
+
+**Screenshots or a paste of terminal output**
+
+If applicable, add screenshots to help explain your problem.
+
+**Versions (please complete the following information):**
+
+- Output of `$ crystal version`
+- Driver version [e.g. 3.x]
+
+**Additional context**
+
+Add any other context about the problem here.
diff --git a/.github/ISSUE_TEMPLATE/driver_migration.md b/.github/ISSUE_TEMPLATE/driver_migration.md
new file mode 100644
index 00000000000..bc50ed19207
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/driver_migration.md
@@ -0,0 +1,20 @@
+---
+name: Driver Migration
+about: Migrate existing Ruby Engine Driver to Crystal
+title: 'Driver Migration: Migrate existing Ruby driver'
+labels: driver
+assignees: ''
+
+---
+
+**Driver to be Migrated**
+
+Information about the driver to be migrated.
+
+**Link to Existing Driver**
+
+Link to existing Driver on Ruby Drivers Repo.
+
+**Additional context**
+
+Add any other context about the problem here.
diff --git a/.github/ISSUE_TEMPLATE/driver_request.md b/.github/ISSUE_TEMPLATE/driver_request.md
new file mode 100644
index 00000000000..b68b3c805a5
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/driver_request.md
@@ -0,0 +1,32 @@
+---
+name: Driver Request
+about: Request a new driver to be created
+title: 'Driver Request: Information required to create a new driver'
+labels: driver
+assignees: ''
+
+---
+
+**Driver Type**
+
+Logic/Device/SSH/Websocket
+
+**Manufacturer**
+
+Manufacturer of device, software or service
+
+**Model/Service**
+
+Model or Service
+
+**Link to or Attach Device API or Protocol**
+
+If applicable, add screenshots to help explain your problem.
+
+**Describe any desired functionality**
+
+- Control all aspects of device
+
+**Additional context**
+
+Add any other context about the driver request here.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
new file mode 100644
index 00000000000..01f460a18d6
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,24 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: 'RFC: Concise description of desired feature'
+labels: ''
+assignees: ''
+
+---
+
+**Is your feature request related to a problem? Please describe.**
+
+A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+**Describe the solution you'd like**
+
+A clear and concise description of what you want to happen.
+
+**Describe alternatives you've considered**
+
+A clear and concise description of any alternative solutions or features you've considered.
+
+**Additional context**
+
+Add any other context or screenshots about the feature request here.
diff --git a/.github/workflows/crystal.yml b/.github/workflows/crystal.yml
new file mode 100644
index 00000000000..597fcda80ae
--- /dev/null
+++ b/.github/workflows/crystal.yml
@@ -0,0 +1,21 @@
+name: Crystal CI
+
+on:
+  push:
+    branches: [ master ]
+  pull_request:
+    branches: [ master ]
+
+jobs:
+  style:
+    runs-on: ubuntu-latest
+    container:
+      image: crystallang/crystal
+    steps:
+      - uses: actions/checkout@v2
+      - name: Format
+        run: crystal tool format --check
+      - name: Lint
+        uses: crystal-ameba/github-action@v0.2.6
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
diff --git a/.gitignore b/.gitignore
index 0792935e4a3..cb138427422 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,3 +4,7 @@ lib
.shards
app
*.dwarf
+repositories/*
+bin
+.DS_Store
+*.rdb
diff --git a/.travis.yml b/.travis.yml
index ffc7b6ac56d..b296a2bc177 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1 +1,10 @@
+dist: xenial
+sudo: required
+
language: crystal
+install:
+ - docker-compose up -d
+ - sleep 10
+script:
+ - docker exec -it drivers crystal spec
+ - docker exec -it drivers /src/bin/report
diff --git a/.vscode/launch.json b/.vscode/launch.json
new file mode 100644
index 00000000000..c1424e57eb7
--- /dev/null
+++ b/.vscode/launch.json
@@ -0,0 +1,16 @@
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "Debug",
+ "type": "gdb",
+ "request": "launch",
+ "target": "./bin/test-harness",
+ "cwd": "${workspaceRoot}",
+ "preLaunchTask": "Compile",
+ "setupCommands": [
+ { "text": "-gdb-set follow-fork-mode child" }
+ ]
+ }
+ ]
+}
diff --git a/.vscode/tasks.json b/.vscode/tasks.json
new file mode 100644
index 00000000000..ce3aa5cfd9f
--- /dev/null
+++ b/.vscode/tasks.json
@@ -0,0 +1,10 @@
+{
+ "version": "2.0.0",
+ "tasks": [
+ {
+ "label": "Compile",
+ "command": "shards build --debug drivers",
+ "type": "shell"
+ }
+ ]
+}
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 00000000000..775184570e7
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,21 @@
+FROM crystallang/crystal:1.0.0-alpine
+COPY . /src
+WORKDIR /src
+
+# Install the latest version of LibSSH2 and the GDB debugger
+RUN apk update
+RUN apk add --no-cache libssh2 libssh2-dev libssh2-static iputils gdb
+RUN apk add --update yaml-static
+
+# Add trusted CAs for communicating with external services
+RUN apk update && apk add --no-cache ca-certificates tzdata && update-ca-certificates
+
+# Build App
+RUN rm -rf lib bin
+RUN mkdir -p /src/bin/drivers
+RUN shards build --error-trace --production --ignore-crystal-version
+
+# Run the app binding on port 8080
+EXPOSE 8080
+ENTRYPOINT ["/src/bin/test-harness"]
+CMD ["/src/bin/test-harness", "-b", "0.0.0.0", "-p", "8080"]
diff --git a/README.md b/README.md
index 1bd8efef9e8..4d47a69e423 100644
--- a/README.md
+++ b/README.md
@@ -1,41 +1,15 @@
-# Spider-Gazelle Application Template
+# PlaceOS Drivers
-[](https://travis-ci.org/spider-gazelle/spider-gazelle)
+[![Build Status](https://travis-ci.org/placeos/drivers.svg?branch=master)](https://travis-ci.org/placeos/drivers)
-Clone this repository to start building your own spider-gazelle based application
+Manage and test [PlaceOS](https://place.technology) drivers.
-## Documentation
+## Development
-* [Action Controller](https://github.com/spider-gazelle/action-controller) base class for building [Controllers](http://guides.rubyonrails.org/action_controller_overview.html)
-* [Active Model](https://github.com/spider-gazelle/active-model) base class for building [ORMs](https://en.wikipedia.org/wiki/Object-relational_mapping)
-* [Habitat](https://github.com/luckyframework/habitat) configuration and settings for Crystal projects
-* [router.cr](https://github.com/tbrand/router.cr) base request handling
-* [Radix](https://github.com/luislavena/radix) Radix Tree implementation for request routing
-* [HTTP::Server](https://crystal-lang.org/api/latest/HTTP/Server.html) built-in Crystal Lang HTTP server
- * Request
- * Response
- * Cookies
- * Headers
- * Params etc
+To spin up the test harness, clone the repository and run...
+```bash
+$ docker-compose up -d
+```
-Spider-Gazelle builds on the amazing performance of **router.cr** [here](https://github.com/tbrand/which_is_the_fastest).:rocket:
-
-
-## Testing
-
-`crystal spec`
-
-* to run in development mode `crystal ./src/app.cr`
-
-## Compiling
-
-`crystal build ./src/app.cr`
-
-### Deploying
-
-Once compiled you are left with a binary `./app`
-
-* for help `./app --help`
-* viewing routes `./app --routes`
-* run on a different port or host `./app -h 0.0.0.0 -p 80`
+Point a browser to [localhost:8085](http://localhost:8085), and you're good to go.
diff --git a/docker-compose.yml b/docker-compose.yml
new file mode 100644
index 00000000000..327c0fd7809
--- /dev/null
+++ b/docker-compose.yml
@@ -0,0 +1,29 @@
+version: "3.7"
+services:
+ redis:
+ image: eqalpha/keydb
+ restart: always
+ hostname: redis
+ environment:
+ - TZ=$TZ
+
+ drivers:
+ build: .
+ image: placeos/drivers
+ restart: always
+ container_name: drivers
+ hostname: drivers
+ environment:
+ - CRYSTAL_PATH=lib:/lib/local-shards
+ depends_on:
+ - redis
+ ports:
+ - 127.0.0.1:8085:8080
+ - 127.0.0.1:4444:4444
+ volumes:
+ - ./drivers/:/src/drivers/
+ - ./repositories/:/src/repositories/
+ - ./lib/:/lib/local-shards/
+ environment:
+ - REDIS_URL=redis://redis:6379
+ - TZ=$TZ
diff --git a/docs/directory_structure.md b/docs/directory_structure.md
new file mode 100644
index 00000000000..88355f1f49b
--- /dev/null
+++ b/docs/directory_structure.md
@@ -0,0 +1,20 @@
+# Directory Structures
+
+PlaceOS core / drivers assumes that driver repositories live one level up
+from the working directory. An example deployment structure:
+
+* Working dir: `/home/placeos/core`
+* Executable: `/home/placeos/core/bin/core`
+* Driver repositories: `/home/placeos/repositories`
+ * PlaceOS Drivers: `/home/placeos/repositories/drivers`
+* Driver executables: `/home/placeos/core/bin/drivers`
+ * Samsung driver: `/home/placeos/core/bin/drivers/353b53_samsung_display_md_series_cr`
+
+However, when developing, the structure will look more like:
+
+* Working dir: `/home/steve/drivers`
+* Driver repository: `/home/steve/drivers`
+* Driver executables: `/home/steve/drivers/bin/drivers`
+ * Samsung driver: `/home/steve/drivers/bin/drivers/353b53_samsung_display_md_series_cr`
+
+The primary difference is that in production, PlaceOS core will clone repositories and install shards as required.
diff --git a/docs/gdb-entitlement.xml b/docs/gdb-entitlement.xml
new file mode 100644
index 00000000000..9d9251f55d9
--- /dev/null
+++ b/docs/gdb-entitlement.xml
@@ -0,0 +1,10 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+    <key>com.apple.security.cs.debugger</key>
+    <true/>
+</dict>
+</plist>
diff --git a/docs/http-api.md b/docs/http-api.md
new file mode 100644
index 00000000000..9e908404b81
--- /dev/null
+++ b/docs/http-api.md
@@ -0,0 +1,154 @@
+# HTTP API
+
+Primarily for development.
+
+
+## GET /build
+
+Returns the list of available drivers
+
+* `repository=folder_name` (optional) if you wish to specify a third party repository
+* `compiled=true` (optional) if you only want the list of compiled drivers
+
+```json
+
+["drivers/place/spec_helper.cr", "..."]
+```
+
+
+### GET /build/repositories
+
+Returns the list of 3rd party repositories
+
+```json
+
+["private_drivers", "..."]
+```
+
+
+### GET /build/repository_commits
+
+Returns the list of available commits at the repository level
+
+* `repository=folder_name` (optional) if you wish to specify a third party repository
+* `count=50` (optional) if you want more or less commits
+
+```json
+
+[
+  {
+    "commit": "01519d6",
+    "date": "2019-06-02T23:59:22+10:00",
+    "author": "Stephen von Takach",
+    "subject": "implement websocket spec runner"
+  }
+]
+```
+
+
+### GET /build/{{escaped driver path}}
+
+Returns the list of compiled versions available for the specified driver
+
+```json
+
+["private_drivers_cr_01519d6", "..."]
+```
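+
+Since driver paths contain `/` characters, the path must be percent-encoded before it can be used as a single URL path segment. A quick sketch using Crystal's standard library (any URL-encoding routine that escapes `/` works equally well):
+
+```crystal
+require "uri"
+
+# "/" must be escaped so the whole driver path fits in one URL segment
+URI.encode_www_form("drivers/place/spec_helper.cr")
+# => "drivers%2Fplace%2Fspec_helper.cr"
+```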
+
+
+### GET /build/{{escaped driver path}}/commits
+
+Returns the list of available commits for the current driver
+
+* `repository=folder_name` (optional) if you wish to specify a third party repository
+* `count=50` (optional) if you want more or less commits
+
+```json
+
+[
+  {
+    "commit": "01519d6",
+    "date": "2019-06-02T23:59:22+10:00",
+    "author": "Stephen von Takach",
+    "subject": "implement websocket spec runner"
+  }
+]
+```
+
+
+### POST /build
+
+Compiles a driver
+
+* `driver=drivers/path.cr` (required) the path to the driver
+* `commit=01519d6` (optional) defaults to the repository HEAD
+
+
+### DELETE /build/{{escaped driver path}}
+
+Deletes compiled versions of a driver
+
+* `repository=folder_name` (optional) if you wish to specify a third party repository
+* `commit=01519d6` (optional) deletes all versions of a driver if not specified
+
+
+## GET /test
+
+Lists the available specs
+
+```json
+
+["drivers/place/spec_helper_spec.cr", "..."]
+```
+
+
+### GET /test/{{escaped spec path}}/commits
+
+Returns the list of available commits for the specified spec
+
+* `repository=folder_name` (optional) if you wish to specify a third party repository
+* `count=50` (optional) if you want more or less commits
+
+```json
+
+[
+  {
+    "commit": "01519d6",
+    "date": "2019-06-02T23:59:22+10:00",
+    "author": "Stephen von Takach",
+    "subject": "implement websocket spec runner"
+  }
+]
+```
+
+
+### POST /test
+
+Compiles and runs a spec and returns the output
+
+* `repository=folder_name` (optional) if you wish to specify a third party repository
+* `driver=drivers/path/to/file.cr` (required) the driver you want to test
+* `spec=drivers/path/to/file_spec.cr` (required) the spec you want to run on the driver
+* `commit=01519d6` (optional) the commit you would like the driver to be running at
+* `spec_commit=01519d6` (optional) the commit you would like the spec to be running at
+* `force=true` (optional) forces a re-compilation of the driver and spec
+* `debug=true` (optional) compiles the files with debugging symbols
+
+```text
+Launching spec runner
+Launching driver: /Users/steve/Documents/projects/placeos/drivers/bin/drivers/drivers_place_private_helper_cr_4f6e0cd
+... starting driver IO services
+... starting module
+... waiting for module
+... module connected
+... enabling debug output
+... starting spec
+... spec complete
+... terminating driver gracefully
+Driver terminated with: 0
+
+
+Finished in 15.65 milliseconds
+0 examples, 0 failures, 0 errors, 0 pending
+
+spec runner exited with 0
+```
+
+
+### WebSocket /test/run_spec
+
+Same requirements as `POST /test` above, however the response is streamed over the WebSocket
diff --git a/docs/runtime-debugging.md b/docs/runtime-debugging.md
new file mode 100644
index 00000000000..9fe51783978
--- /dev/null
+++ b/docs/runtime-debugging.md
@@ -0,0 +1,195 @@
+# Runtime Debugging
+
+This is supported via VS Code on OSX or Linux platforms.
+It might be possible to do remote debugging on Windows in conjunction with the Windows Subsystem for Linux.
+
+* Requires [VS Code](https://code.visualstudio.com/)
+ * install [Crystal Lang](https://marketplace.visualstudio.com/items?itemName=faustinoaq.crystal-lang) extension
+ * install [Native Debug](https://marketplace.visualstudio.com/items?itemName=webfreak.debug) extension
+* Requires [GDB](https://www.gnu.org/software/gdb/)
+ * On OSX install using [Homebrew](https://brew.sh/)
+ * Then code sign the executable: https://sourceware.org/gdb/wiki/PermissionsDarwin
+ * The `gdb-entitlement.xml` file is in this folder
+ * When creating the signing certificate follow [this guide](https://apple.stackexchange.com/questions/309017/unknown-error-2-147-414-007-on-creating-certificate-with-certificate-assist)
+
+This should also work with [LLDB](https://lldb.llvm.org/) on OSX, however it [has issues](https://github.com/crystal-lang/crystal/issues/4457).
+
+
+## Debug on VSCode
+
+By convention the project directory name is the same as your application name. If you have changed it, please update `${workspaceFolderBasename}` with the name configured inside `shard.yml`.
+
+### 1. `tasks.json` configuration to compile a crystal project
+
+```javascript
+{
+ "version": "2.0.0",
+ "tasks": [
+ {
+ "label": "Compile",
+ "command": "shards build --debug ${workspaceFolderBasename}",
+ "type": "shell"
+ }
+ ]
+}
+```
+
+### 2. `launch.json` configuration to debug a binary
+
+#### Using GDB
+
+```javascript
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "Debug",
+ "type": "gdb",
+ "request": "launch",
+ "target": "./bin/${workspaceFolderBasename}",
+ "cwd": "${workspaceRoot}",
+ "preLaunchTask": "Compile"
+ }
+ ]
+}
+```
+
+#### Using LLDB
+
+```javascript
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "Debug",
+ "type": "lldb-mi",
+ "request": "launch",
+ "target": "./bin/${workspaceFolderBasename}",
+ "cwd": "${workspaceRoot}",
+ "preLaunchTask": "Compile"
+ }
+ ]
+}
+```
+
+### 3. Then hit the DEBUG green play button
+
+
+
+## Tips and Tricks for debugging Crystal applications
+
+### 1. Use debugger keyword
+
+Instead of setting breakpoints with commands inside GDB or LLDB, you can set a breakpoint using the `debugger` keyword.
+
+```ruby
+i = 0
+while i < 3
+ i += 1
+ debugger # => breakpoint
+end
+```
+
+### 2. Avoid breakpoints inside blocks
+
+Currently, Crystal lacks support for debugging inside of blocks. If you put a breakpoint inside a block, it will be ignored.
+
+As a workaround, use `pp` to pretty print objects inside of blocks.
+
+```ruby
+3.times do |i|
+ pp i
+end
+# i => 0
+# i => 1
+# i => 2
+```
+
+### 3. Try `@[NoInline]` to debug arguments data
+
+Sometimes Crystal will optimize argument data, so the debugger will show `<optimized out>` instead of the arguments. To avoid this behavior use the `@[NoInline]` attribute before your function implementation.
+
+```ruby
+@[NoInline]
+def foo(bar)
+ debugger
+end
+```
+
+### 4. Printing string objects (GDB)
+
+To print string objects in the debugger:
+
+First, set up a breakpoint with the `debugger` statement:
+
+```ruby
+foo = "Hello World!"
+debugger
+```
+
+Then use `print` in the debugging console.
+
+```bash
+(gdb) print &foo.c
+$1 = (UInt8 *) 0x10008e6c4 "Hello World!"
+```
+
+Or add `&foo.c` as a new variable entry in the watch section of the VS Code debugger.
+
+
+
+### 5. Printing array variables
+
+To print array items in the debugger:
+
+First, set up a breakpoint with the `debugger` statement:
+
+```ruby
+foo = ["item 0", "item 1", "item 2"]
+debugger
+```
+
+Then use `print` in the debugging console:
+
+```bash
+(gdb) print &foo.buffer[0].c
+$19 = (UInt8 *) 0x10008e7f4 "item 0"
+```
+
+Change the buffer index for each item you want to print.
+
+### 6. Printing instance variables
+
+To print the `@foo` instance variable in this code:
+
+```ruby
+class Bar
+ @foo = 0
+ def baz
+ debugger
+ end
+end
+
+Bar.new.baz
+```
+
+You can use `self.foo` in the debugger terminal or VSCode GUI.
+
+### 7. Print hidden objects
+
+Some objects do not show at all. You can unhide them using the `.to_s` method and a temporary debugging variable, like this:
+
+```ruby
+def bar(hello)
+ "#{hello} World!"
+end
+
+def foo(hello)
+ bar_hello_to_s = bar(hello).to_s
+ debugger
+end
+
+foo("Hello")
+```
+
+This trick allows showing the `bar_hello_to_s` variable inside the debugger tool.
diff --git a/docs/setup.md b/docs/setup.md
new file mode 100644
index 00000000000..7c6cd358d15
--- /dev/null
+++ b/docs/setup.md
@@ -0,0 +1,37 @@
+# Setup
+
+This allows you to build and test drivers without installing or running the complete PlaceOS service.
+
+1. Clone the drivers repository: `git clone https://github.com/placeos/drivers drivers`
+2. Create a folder for private repositories and clone them into it: `mkdir ./drivers/repositories`
+
+
+## OSX
+
+Install [Homebrew](https://brew.sh/), then use it to install the dependencies:
+
+* Install [Crystal Lang](https://crystal-lang.org/reference/installation/): `brew install crystal`
+* Install libssh2: `brew install libssh2`
+* Install redis: `brew install redis`
+
+Ensure the following lines are in your `.bashrc` file
+
+```shell
+export PATH="/usr/local/opt/llvm/bin:$PATH"
+export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/opt/openssl/lib/pkgconfig
+```
+
+
+## Running Specs
+
+1. Ensure redis is running: `redis-server`
+2. Install dependencies: `cd drivers; shards update`
+3. Launch application: `crystal run ./src/app.cr`
+4. Browse to: http://localhost:3000/
+
+Now you can build drivers and run specs:
+
+* Build a driver or spec: `curl -X POST "http://localhost:3000/build?driver=drivers/helvar/net.cr"`
+* Run a spec: `curl -X POST "http://localhost:3000/test?driver=drivers/lutron/lighting.cr&spec=drivers/lutron/lighting_spec.cr"`
+
+To build or test against drivers in private repositories include the repository param: `repository=private_drivers`
diff --git a/docs/writing-a-driver.md b/docs/writing-a-driver.md
new file mode 100644
index 00000000000..67247581c4a
--- /dev/null
+++ b/docs/writing-a-driver.md
@@ -0,0 +1,490 @@
+# How to write a driver
+
+There are three kinds of drivers:
+
+* Streaming IO (TCP, SSH, UDP, multicast, etc.)
+* HTTP Client
+* Logic
+
+From a driver structure standpoint there is no difference between these types.
+
+* The same driver can be used over a TCP, UDP or SSH transport.
+* All drivers support HTTP methods if a URI endpoint is defined.
+* If a driver is associated with a System then it has access to logic helpers
+
+However, a driver will typically implement only one of these interfaces.
+
+
+## Concepts
+
+Backing a driver are a few different pieces that make it function:
+
+* Queue
+* Transport
+* Subscriptions
+* Scheduler
+* Settings
+* Logger
+* Metadata
+* Security
+* Interfaces
+
+
+### Queue
+
+The queue is a list of potentially asynchronous tasks that should be performed in a sequence.
+
+* Each task has a priority (defaults to `50`) - higher priority tasks run first
+* Tasks can be named. If a new task is added with the same name it replaces the existing task.
+* Tasks have a timeout (defaults to `5.seconds`)
+* Tasks can be retried (defaults to `3` before failing)
+
+Tasks have a callback that is used to run the task
+
+```crystal
+
+# => you can set queue defaults globally
+
+# set a delay between the current task completing and the next task
+queue.delay = 1.second
+queue.retries = 5
+
+queue(priority: 20, timeout: 1.second) do |task|
+ # perform action here
+
+ # signal result
+ task.success("optional success value")
+ task.abort("optional failure message")
+ task.retry
+
+ # Give me more time to complete the task
+ task.reset_timers
+end
+
+```
+
+In most cases you won't need to use the queue explicitly however it is good to understand that it is there and how it functions.
+
+
+### Transport
+
+The transport loaded is defined by settings in the database.
+
+#### Streaming IO
+
+You should always tokenise your streams.
+This can be handled automatically by the [built-in tokeniser](https://github.com/spider-gazelle/tokenizer).
+
+```crystal
+
+def on_load
+ transport.tokenizer = Tokenizer.new("\r\n")
+end
+
+```
+
+There are a few ways to use streaming IO methods:
+
+1. send and receive
+
+```crystal
+
+def perform_action
+ # You call send with some data.
+ # you can also optionally pass some queue options to the function
+ send("message data", priority: 30, name: "generic-message")
+end
+
+# A common received function for handling responses
+def received(data, task)
+ # data is always `Bytes`
+ # task is always `PlaceOS::Driver::Task?` (i.e. could be nil if no active task)
+
+ # convert data into the appropriate format
+ data = String.new(data)
+
+ # decide if the request was a success or not
+ # you can pass any value that is JSON serialisable to success
+ # (if it can't be serialised then nil is sent)
+ task.try &.success(data)
+end
+
+```
+
+2. send and callback
+
+```crystal
+
+def perform_action
+ request = "build request"
+
+ send(request, priority: 30, name: "generic-message") do |data, task|
+ data = String.new(data)
+
+ # process response here (might need to know the request context)
+
+ task.try &.success(data)
+ end
+end
+
+```
+
+3. send immediately (no queuing)
+
+```crystal
+
+def perform_action_now!
+ transport.send("no queue")
+end
+
+```
+
+You can also add a pre-processor to incoming data. This can be useful
+if you want to strip away a protocol layer, e.g. you are communicating
+over Telnet and want to remove the telnet signals, leaving the raw
+comms for tokenisation.
+
+```crystal
+
+def on_load
+ transport.pre_processor do |bytes|
+ # you must return some byte data or nil if no processing is required
+ # tokenisation occurs on the data returned here
+ bytes[1..-2]
+ end
+end
+
+def received(data, task)
+ # data coming in here is both pre_processed and tokenised
+end
+
+```
+
+
+#### HTTP Client
+
+All drivers have built in methods for performing HTTP requests.
+
+* For streaming IO devices this defaults to `http://device.ip.address` or `https` if the transport is using TLS / SSH.
+* All devices can provide a custom HTTP base URI.
+
+There are methods for all the typical HTTP verbs: get, post, put, patch, delete
+
+```crystal
+
+def perform_action
+ basic_auth = "Basic #{Base64.strict_encode("#{@username}:#{@password}")}"
+
+ response = post("/v1/message/path", body: {
+ messages: numbers,
+ }.to_json, headers: {
+ "Authorization" => basic_auth,
+ "Content-Type" => "application/json",
+ "Accept" => "application/json",
+ }, params: {
+ "key" => "value"
+ })
+
+ raise "request failed with #{response.status_code}" unless (200...300).include?(data.status_code)
+end
+
+```
+
+
+#### Special SSH methods
+
+SSH connections will attempt to open a shell to the remote device however sometimes you may be able to execute operations independently.
+
+```crystal
+
+def perform_action
+ # if the application launched supports input you can use the bidirectional IO
+ # to communicate with the app
+ io = exec("command")
+end
+
+```
+
+
+#### Logic drivers
+
+The main difference between logic drivers and other transports is that a logic module is directly associated with a System and cannot be shared (all other drivers can appear in multiple systems).
+
+* You can access remote modules in the system via the `system` helper
+
+```crystal
+
+# Get a system proxy
+sys = system
+sys.name #=> "Name of system"
+sys.email #=> "resource@email.address"
+sys.capacity #=> 12
+sys.bookable #=> true
+sys.id #=> "sys-tem~id"
+sys.modules #=> ["Array", "Of", "Unique", "Module", "Names", "In", "System"]
+sys.count("Module") #=> 3
+sys.implementing(PlaceOS::Driver::Interface::Powerable) #=> ["Camera", "Display"]
+
+# Look at status on a remote module
+system[:Display][:power] #=> true
+
+# Access a different module index
+system[:Display_2][:power]
+system.get(:Display, 2)[:power]
+
+# Access all modules of a type
+system.all(:Display)
+
+# Check if a module exists
+system.exists?(:Display) #=> true
+system.exists?(:Display_2) #=> false
+
+```
+
+you can bind to state in remote modules
+
+```crystal
+
+bind Display_1, :power, :power_changed
+
+private def power_changed(subscription, new_value)
+ logger.debug { new_value }
+end
+
+
+# you can also bind to internal state (available in all drivers)
+bind :power, :power_changed
+
+```
+
+It's also possible to create shortcuts to other modules.
+This is powerful as these shortcuts are exposed as metadata, allowing Backoffice to perform system verification.
+
+For example, consider the following video conference system:
+
+```crystal
+
+# It requires at least one camera that can move and be turned on and off
+accessor camera : Array(Camera), implementing: [Powerable, Moveable]
+
+# Optional room blinds that can be opened and closed
+accessor blinds : Array(Blind)?, implementing: [Switchable]
+
+# A single display is required with an optional screen (maybe it's a projector)
+accessor main_display : Display_1, implementing: Powerable
+accessor screen : Screen?
+
+```
+
+Cross system communication is possible if you know the ID of the remote system.
+
+```crystal
+# once you have reference to the remote system you can perform any
+# actions that you might perform on the local system
+sys = system("sys-12345")
+
+sys.name #=> "Name of remote system"
+sys[:Display_2][:power] #=> true
+```
+
+
+### Subscriptions
+
+You can dynamically bind to state of interest in remote modules
+
+```crystal
+
+# subscription is returned and provided with every status update in the callback
+subscription = system.subscribe(:Display_1, :power) do |subscription, new_value|
+ # values are always raw JSON strings
+ JSON.parse(new_value)
+end
+
+# Local subscriptions
+subscription = subscribe(:state) do |subscription, new_value|
+ # values are always raw JSON strings
+ JSON.parse(new_value)
+end
+
+# Clearing all subscriptions
+subscriptions.clear
+
+```
+
+Similarly to subscriptions, channels can be set up for broadcasting
+arbitrary data that might not need to be exposed as state.
+
+```crystal
+
+subscription = monitor(:channel_name) do |subscription, new_value|
+ # values are always raw JSON strings
+ JSON.parse(new_value)
+end
+
+# Publish something on the channel to all listeners
+publish(:channel_name, "some event")
+
+```
+
+
+### Scheduler
+
+There is a built in scheduler: https://github.com/spider-gazelle/tasker
+
+```crystal
+
+def connected
+ schedule.every(40.seconds) { poll_device }
+ schedule.in(200.milliseconds) { send_hello }
+end
+
+def disconnected
+ schedule.clear
+end
+
+```
+
+
+### Settings
+
+Settings are stored as JSON and extracted as required, serialising to the specified type.
+There are two types:
+
+* Required settings - raise an error if the setting is unavailable
+* Optional settings - return `nil` if the setting is unavailable
+
+NOTE:: All settings will raise an error if they exist but fail to serialise (e.g. because they are not formatted correctly)
+
+```crystal
+
+# Required settings
+def on_update
+ @display_id = setting(Int32, :display_id)
+
+ # Can extract deeply nested values
+ # i.e. {input: {list: ["HDMI", "VGA"] }}
+ @primary_input = setting(InputEnum, :input, :list, 0)
+end
+
+# Optional settings (you can optionally provide a default)
+def on_update
+ @display_id = setting?(Int32, :display_id) || 1
+ @primary_input = setting?(InputEnum, :input, :list, 0) || InputEnum::HDMI
+end
+
+```
+
+You can update the local settings of a module, persisting them to the database. Settings must be JSON serialisable
+
+```crystal
+define_setting(:my_setting_name, "some JSON serialisable data")
+```
+
+
+### Logger
+
+There is a logger available: https://crystal-lang.org/api/latest/Logger.html
+
+* `warn` and above are written to disk.
+* `debug` and `info` are only available when there is an open debugging session.
+
+```crystal
+
+logger.warn { "error unknown response" }
+logger.debug { "function called with #{value}" }
+
+```
+
+The logging format has been pre-configured so all logging from PlaceOS is uniform and simple to parse.
+
+
+### Metadata
+
+Metadata is used by various components to simplify configuration.
+
+* `generic_name` => the name that should be used in a system to access the module
+* `descriptive_name` => the manufacturer's name for the device
+* `description` => notes or any other descriptive information you wish to add
+* `tcp_port` => TCP port the TCP transport should connect to
+* `udp_port` => UDP port the UDP transport should connect to
+* `uri_base` => The HTTP base for any HTTP requests
+* `default_settings` => Defaults or example settings that should be used to configure a module
+
+
+```crystal
+
+class MyDevice < PlaceOS::Driver
+ generic_name :Driver
+ descriptive_name "Driver model Test"
+ description "This is the driver used for testing"
+ tcp_port 22
+ default_settings({
+ name: "Room 123",
+ username: "steve",
+ password: "$encrypt",
+ complex: {
+ crazy_deep: 1223,
+ },
+ })
+
+ # ...
+
+end
+
+```
+
+
+### Security
+
+By default, all public functions are exposed for execution.
+However, you can limit who is able to execute sensitive functions.
+
+```crystal
+
+@[Security(Level::Administrator)]
+def perform_task(name : String | Int32)
+ queue &.success("hello #{name}")
+end
+
+```
+
+Use the `Security` annotation to define the access level of the function.
+The options are:
+
+* Administrator `Level::Administrator`
+* Support `Level::Support`
+
+
+### Interfaces
+
+Drivers can expose any methods that make sense for the device, service or logic they encapsulate.
+Across these, there are often common sets of functionality.
+Interfaces provide a standard way of implementing and interacting with these.
+
+Their usage is optional, but highly encouraged, as it both improves modularity and reduces complexity in driver implementations.
+
+A full list of interfaces is [available in the driver framework](https://github.com/PlaceOS/driver/tree/master/src/placeos-driver/interface). This will expand over time to cover common, repeated patterns as they emerge.
+
+#### Implementing an Interface
+
+Each interface is a module containing abstract methods, types and functionality built from these.
+
+First include the module within the driver body.
+```crystal
+include Interface::Powerable
+```
+You will then need to provide implementations of the abstract methods.
+The compiler will guide you in this.
+
+Some interfaces also provide default implementations of other methods.
+These may be overridden if the device or service provides a more efficient way to directly execute the desired behaviour.
+To keep compatibility, overridden methods must maintain feature and functional parity with the original implementation.
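+
+For example, a display driver that includes `Interface::Powerable` might satisfy its abstract method like this (a minimal sketch; check the interface module for the exact signatures required):
+
+```crystal
+class ExampleDisplay < PlaceOS::Driver
+  include Interface::Powerable
+
+  # implement the abstract method required by the interface
+  def power(state : Bool)
+    self[:power] = state
+  end
+end
+```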
+
+#### Using an Interface
+
+Drivers that provide an Interface can be discovered using the `system.implementing` method from any logic module.
+This will return a list of all drivers in the system which implement the Interface.
+
+Similarly, the `accessor` macro provides a way to declare a dependency on a sibling driver that provides specific functionality.
+
+For more information on these and for usage examples, see [logic drivers](#logic-drivers).
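+
+As an illustrative sketch (the logic class here is hypothetical), a logic module could power off every driver in its system that implements `Powerable`:
+
+```crystal
+class ExampleLogic < PlaceOS::Driver
+  # discover and call all drivers in the system implementing the Powerable interface
+  def shutdown
+    system.implementing(Interface::Powerable).power false
+  end
+end
+```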
diff --git a/docs/writing-a-spec.md b/docs/writing-a-spec.md
new file mode 100644
index 00000000000..d470973feb7
--- /dev/null
+++ b/docs/writing-a-spec.md
@@ -0,0 +1,231 @@
+# How to write a spec
+
+There are three kinds of drivers:
+
+* Streaming IO (TCP, SSH, UDP, Multicast, etc.)
+* HTTP Client
+* Logic
+
+From a driver code structure standpoint there is no difference between these types.
+
+* The same driver can be used over a TCP, UDP or SSH transport.
+* All drivers support HTTP methods if a URI endpoint is defined.
+* If a driver is associated with a System, it has access to logic helpers.
+
+During a test, the module is loaded with a TCP transport, HTTP enabled and logic module capabilities.
+This allows for testing the full capabilities of any driver.
+
+The driver is launched as it would be in production.
+
+
+## Expectations
+
+Specs have access to the standard Crystal spec expectations, allowing you to assert that your driver behaves as expected:
+https://crystal-lang.org/api/latest/Spec/Expectations.html
+
+```crystal
+
+variable = 34
+variable.should eq(34)
+
+```
+
+There is a good overview on how to use expectations here: https://crystal-lang.org/reference/guides/testing.html
+
+
+### Status
+
+Expectations are primarily there to test the state of the module.
+
+* You can access state via the status helper: `status[:state_name]`
+* Then you can check it against an expected value: `status[:state_name].should eq(14)`
+
+
+## Testing Streaming IO
+
+The following functions are available for testing streaming IO:
+
+* `transmit(data)` -> transmits the object to the module over the streaming IO interface
+* `responds(data)` -> alias for `transmit`
+* `should_send(data, timeout = 500.milliseconds)` -> expects the module to send the data provided
+* `expect_send(timeout = 500.milliseconds)` -> returns the next `Bytes` sent by the module (useful if the data sent is not deterministic, i.e. includes a timestamp)
+
+A common test case is to ensure that module state updates as expected after transmitting some data to it:
+
+```crystal
+
+# transmit some data
+transmit(">V:2,C:11,G:2001,B:1,S:1,F:100#")
+
+# check that the state updated as expected
+status[:area2001].should eq(1)
+
+```
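+
+When the outbound data is not deterministic (for example it embeds a timestamp), `expect_send` can capture the raw bytes for manual inspection. A sketch, using a hypothetical payload format:
+
+```crystal
+# grab the next packet sent by the module
+sent = expect_send(1.second)
+
+# inspect it manually, as the timestamp portion varies
+String.new(sent).starts_with?("TIME:").should be_true
+```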
+
+
+## Testing HTTP requests
+
+The test suite emulates an HTTP server so you can inspect HTTP requests and send canned responses to the module.
+
+```crystal
+
+expect_http_request do |request, response|
+ io = request.body
+ if io
+ data = io.gets_to_end
+    parsed = JSON.parse(data)
+    if parsed["message"] == "hello steve"
+ response.status_code = 202
+ else
+ response.status_code = 401
+ end
+ else
+ raise "expected request to include dialing details #{request.inspect}"
+ end
+end
+
+# check that the state updated as expected
+status[:area2001].should eq(1)
+
+```
+
+Use `expect_http_request` to access an expected request coming from the module.
+
+* when the block completes, the response is sent to the module
+* you can see `request` object details here: https://crystal-lang.org/api/latest/HTTP/Request.html
+* you can see `response` object details here: https://crystal-lang.org/api/latest/HTTP/Server/Response.html
+
+
+## Executing functions
+
+This allows you to request actions be performed in the module via the standard public interface.
+
+* `exec(:function_name, argument_name: argument_value)` -> returns a `response` future (async return value)
+* You should send any `responds(data)` before inspecting `response.get`
+
+```crystal
+
+# Execute a command
+response = exec(:scene?, area: 1)
+
+# Check that the command causes the module to send some data
+should_send("?AREA,1,6\r\n")
+# Respond to that command
+responds("~AREA,1,6,2\r\n")
+
+# Check if the function's return value is expected
+response.get.should eq(2)
+# Check if the module state is correct
+status[:area1].should eq(2)
+
+```
+
+
+## Testing Logic
+
+Logic modules typically expect a system to contain some drivers which the logic module interacts with.
+
+```crystal
+
+# define mock versions of the drivers it will interact with
+
+class Display < DriverSpecs::MockDriver
+ include Interface::Powerable
+ include Interface::Muteable
+
+ enum Inputs
+ HDMI
+ HDMI2
+ VGA
+ VGA2
+ Miracast
+ DVI
+ DisplayPort
+ HDBaseT
+ Composite
+ end
+
+ include PlaceOS::Driver::Interface::InputSelection(Inputs)
+
+ # Configure initial state in on_load
+ def on_load
+ self[:power] = false
+ self[:input] = Inputs::HDMI
+ end
+
+ # implement the abstract methods required by the interfaces
+ def power(state : Bool)
+ self[:power] = state
+ end
+
+ def switch_to(input : Inputs)
+ mute(false)
+ self[:input] = input
+ end
+
+ def mute(
+ state : Bool = true,
+ index : Int32 | String = 0,
+ layer : MuteLayer = MuteLayer::AudioVideo
+ )
+ self[:mute] = state
+ self[:mute0] = state
+ end
+end
+
+```
+
+Then you can define the system configuration.
+You can also change the system configuration throughout your spec to test different configurations.
+
+```crystal
+
+DriverSpecs.mock_driver "Place::LogicExample" do
+
+ # Where `{Display, Display}` is referencing the `MockDriver` class defined above
+ # and `Display:` is the friendly name
+ # so this system would have `Display_1`, `Display_2`, `Switcher_1`
+ system({
+ Display: {Display, Display},
+ Switcher: {Switcher},
+ })
+
+ # ...
+end
+
+```
+
+Along with the physical system configuration, you can test different setting configurations.
+Settings can also be changed throughout the life cycle of your spec.
+
+```crystal
+
+DriverSpecs.mock_driver "Place::LogicExample" do
+
+ settings({
+ name: "Meeting Room 1",
+ map_id: "1.03"
+ })
+
+end
+
+```
+
+An action you perform on your driver might be expected to update state in the mock devices.
+You can access this state via the `system` helper:
+
+```crystal
+
+DriverSpecs.mock_driver "Place::LogicExample" do
+
+ # execute a function in your logic module
+ exec(:power, true)
+
+  # Check that the expected state has updated in your mock device
+ system(:Display_1)[:power].should eq(true)
+
+end
+
+```
+
+All status queried in this manner is returned as a `JSON::Any` object.
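+
+As the values are wrapped in `JSON::Any`, you can either compare them directly with `should eq` or cast them to a concrete type first:
+
+```crystal
+power = system(:Display_1)[:power]
+
+# compare the wrapped value directly
+power.should eq(true)
+
+# or cast to a concrete type
+power.as_bool.should be_true
+```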
diff --git a/drivers/biamp/nexia.cr b/drivers/biamp/nexia.cr
new file mode 100644
index 00000000000..1cdb6fe696d
--- /dev/null
+++ b/drivers/biamp/nexia.cr
@@ -0,0 +1,209 @@
+module Biamp; end
+
+class Biamp::Nexia < PlaceOS::Driver
+ # Discovery Information
+ tcp_port 23 # Telnet
+ descriptive_name "Biamp Nexia/Audia"
+ generic_name :Mixer
+
+ alias Ids = Array(UInt32) | UInt32
+
+ def on_load
+ # Nexia requires some breathing room
+ queue.wait = false
+ queue.delay = 30.milliseconds
+ transport.tokenizer = Tokenizer.new("\r\n", "\xFF\xFE\x01")
+ end
+
+ def on_update
+ # min -100
+ # max +12
+
+ self["fader_min"] = -36 # specifically for tonsley
+ self["fader_max"] = 12
+ end
+
+ def connected
+ send("\xFF\xFE\x01") # Echo off
+ do_send("GETD", 0, "DEVID")
+
+ schedule.every(60.seconds) do
+ do_send("GETD", 0, "DEVID")
+ end
+ end
+
+ def disconnected
+ schedule.clear
+ end
+
+ def preset(number : UInt32)
+ #
+ # Recall Device 0 Preset number 1001
+ # Device Number will always be 0 for Preset strings
+ # 1001 == minimum preset number
+ #
+ do_send("RECALL", 0, "PRESET", number, name: "preset_#{number}")
+ end
+
+ # {1 => [2,3,5], 2 => [2,3,6]}, true
+ # Supports Standard, Matrix and Automixers
+ def mixer(id : UInt32, inouts : Hash(String, Float32 | Array(Float32)) | Array(Float32), mute : Bool = false, type : String = "matrix")
+ value = mute ? 0 : 1
+
+ if inouts.is_a? Hash
+ req = type == "matrix" ? "MMMUTEXP" : "SMMUTEXP"
+
+ inouts.each_key do |input|
+ outputs = inouts[input]
+ outs = ensure_array(outputs)
+
+ outs.each do |output|
+ do_send("SET", self["device_id"]?, req, id, input, output, value)
+ end
+ end
+ else # assume array (auto-mixer)
+ inouts.each do |input|
+ do_send("SET", self["device_id"]?, "AMMUTEXP", id, input, value)
+ end
+ end
+ end
+
+ FADERS = {
+ "fader" => "FDRLVL",
+ "matrix_in" => "MMLVLIN",
+ "matrix_out" => "MMLVLOUT",
+ "matrix_crosspoint" => "MMLVLXP",
+ "stdmatrix_in" => "SMLVLIN",
+ "stdmatrix_out" => "SMLVLOUT",
+ "auto_in" => "AMLVLIN",
+ "auto_out" => "AMLVLOUT",
+ "io_in" => "INPLVL",
+ "io_out" => "OUTLVL",
+ "FDRLVL" => "fader",
+ "MMLVLIN" => "matrix_in",
+ "MMLVLOUT" => "matrix_out",
+ "MMLVLXP" => "matrix_crosspoint",
+ "SMLVLIN" => "stdmatrix_in",
+ "SMLVLOUT" => "stdmatrix_out",
+ "AMLVLIN" => "auto_in",
+ "AMLVLOUT" => "auto_out",
+ "INPLVL" => "io_in",
+ "OUTLVL" => "io_out",
+ }
+
+ def fader(fader_id : Ids, level : Float32, index : Int32 = 1, type : String = "fader")
+ fad_type = FADERS[type]
+
+ # value range: -100 ~ 12
+ faders = ensure_array(fader_id)
+ faders.each do |fad|
+ do_send("SETD", self["device_id"]?, fad_type, fad, index, level, name: "fader_#{fad}")
+ end
+ end
+
+ def faders(ids : Ids, level : Float32, index : Int32 = 1, type : String = "fader", **args)
+ fader(ids, level, index, type)
+ end
+
+ MUTES = {
+ "fader" => "FDRMUTE",
+ "matrix_in" => "MMMUTEIN",
+ "matrix_out" => "MMMUTEOUT",
+ "auto_in" => "AMMUTEIN",
+ "auto_out" => "AMMUTEOUT",
+ "stdmatrix_in" => "SMMUTEIN",
+ "stdmatrix_out" => "SMOUTMUTE",
+ "io_in" => "INPMUTE",
+ "io_out" => "OUTMUTE",
+ "FDRMUTE" => "fader",
+ "MMMUTEIN" => "matrix_in",
+ "MMMUTEOUT" => "matrix_out",
+ "AMMUTEIN" => "auto_in",
+ "AMMUTEOUT" => "auto_out",
+ "SMMUTEIN" => "stdmatrix_in",
+ "SMOUTMUTE" => "stdmatrix_out",
+ "INPMUTE" => "io_in",
+ "OUTMUTE" => "io_out",
+ }
+
+ def mute(fader_id : Ids, val : Bool = true, index : Int32 = 1, type : String = "fader")
+ actual = val ? 1 : 0
+ mute_type = MUTES[type]
+
+ faders = ensure_array(fader_id)
+ faders.each do |fad|
+ do_send("SETD", self["device_id"]?, mute_type, fad, index, actual, name: "mute_#{fad}")
+ end
+ end
+
+ def mutes(ids : Ids, muted : Bool = true, index : Int32 = 1, type : String = "fader", **args)
+ mute(ids, muted, index, type)
+ end
+
+ def unmute(fader_id : Ids, index : Int32 = 1, type : String = "fader")
+ mute(fader_id, false, index, type)
+ end
+
+ def query_fader(fader_id : Ids, index : Int32 = 1, type : String = "fader")
+ fad = ensure_single(fader_id)
+ fad_type = FADERS[type]
+
+ do_send("GETD", self["device_id"]?, fad_type, fad, index)
+ end
+
+ def query_faders(ids : Ids, index : Int32 = 1, type : String = "fader", **args)
+ query_fader(ids, index, type)
+ end
+
+ def query_mute(fader_id : Ids, index : Int32 = 1, type : String = "fader")
+ fad = ensure_single(fader_id)
+ mute_type = MUTES[type]
+
+ do_send("GETD", self["device_id"]?, mute_type, fad, index)
+ end
+
+ def query_mutes(ids : Ids, index : Int32 = 1, type : String = "fader", **args)
+ query_mute(ids, index, type)
+ end
+
+ def received(data, task)
+ data = String.new(data)
+
+ if data =~ /-ERR/
+ return task.try &.abort
+ else
+ logger.debug { "Nexia responded #{data}" }
+ end
+
+ # --> "#SETD 0 FDRLVL 29 1 0.000000 +OK"
+ data = data.split(" ")
+    if resp_type = data[2]?
+
+ if resp_type == "DEVID"
+ # "#GETD 0 DEVID 1 "
+ self["device_id"] = data[-1].to_i
+ elsif MUTES.has_key?(resp_type)
+ type = MUTES[resp_type]
+ self["#{type}#{data[3]}_#{data[4]}_mute"] = data[5] == "1"
+ elsif FADERS.has_key?(resp_type)
+ type = FADERS[resp_type]
+ self["#{type}#{data[3]}_#{data[4]}"] = data[5]
+ end
+ end
+
+ task.try &.success
+ end
+
+ private def do_send(*args, **options)
+ send("#{args.join(' ')} \n", **options)
+ end
+
+ private def ensure_array(object)
+ object.is_a?(Array) ? object : [object]
+ end
+
+ private def ensure_single(object)
+ object.is_a?(Array) ? object[0] : object
+ end
+end
diff --git a/drivers/biamp/nexia_spec.cr b/drivers/biamp/nexia_spec.cr
new file mode 100644
index 00000000000..3680469d5ec
--- /dev/null
+++ b/drivers/biamp/nexia_spec.cr
@@ -0,0 +1,52 @@
+DriverSpecs.mock_driver "Biamp::Nexia" do
+ should_send "\xFF\xFE\x01"
+ should_send("GETD 0 DEVID")
+
+ exec(:preset, 1001)
+ should_send("RECALL 0 PRESET 1001")
+
+ exec(:fader, 1, -100)
+ should_send("SETD FDRLVL 1 1 -100.0")
+ responds("SETD FDRLVL 1 1 -100.0 \r\n")
+ status["fader1_1"].should eq("-100.0")
+
+ exec(:faders, 1, -75, 2, "matrix_in")
+ should_send("SETD MMLVLIN 1 2 -75.0")
+ responds("SETD MMLVLIN 1 2 -75.0 \r\n")
+ status["matrix_in1_2"].should eq("-75.0")
+
+ exec(:mute, 1234, false, 3)
+ should_send("SETD FDRMUTE 1234 3 0")
+ responds("SETD FDRMUTE 1234 3 0 \r\n")
+ status["fader1234_3_mute"].should eq(false)
+
+ exec(:mutes, 1234, true, 5, "auto_in")
+ should_send("SETD AMMUTEIN 1234 5 1")
+ responds("SETD AMMUTEIN 1234 5 1 \r\n")
+ status["auto_in1234_5_mute"].should eq(true)
+
+ exec(:unmute, 111)
+ should_send("SETD FDRMUTE 111 1 0")
+ responds("SETD FDRMUTE 111 1 0 \r\n")
+ status["fader111_1_mute"].should eq(false)
+
+ exec(:query_fader, 133)
+ should_send("GETD FDRLVL 133 1 ")
+ responds("GETD FDRLVL 133 1 -100.0 \r\n")
+ status["fader133_1"].should eq("-100.0")
+
+ exec(:query_faders, 144)
+ should_send("GETD FDRLVL 144 1 ")
+ responds("GETD FDRLVL 144 1 -80.0 \r\n")
+ status["fader144_1"].should eq("-80.0")
+
+ exec(:query_mute, 155)
+ should_send("GETD FDRMUTE 155 1 ")
+ responds("GETD FDRMUTE 155 1 0 \r\n")
+ status["fader155_1_mute"].should eq(false)
+
+ exec(:query_mutes, 166)
+ should_send("GETD FDRMUTE 166 1 ")
+ responds("GETD FDRMUTE 166 1 1 \r\n")
+ status["fader166_1_mute"].should eq(true)
+end
diff --git a/drivers/biamp/tesira.cr b/drivers/biamp/tesira.cr
new file mode 100644
index 00000000000..7b36145ef83
--- /dev/null
+++ b/drivers/biamp/tesira.cr
@@ -0,0 +1,220 @@
+require "telnet"
+
+module Biamp; end
+
+class Biamp::Tesira < PlaceOS::Driver
+ # Discovery Information
+ tcp_port 23 # Telnet
+ descriptive_name "Biamp Tesira"
+ generic_name :Mixer
+
+ default_settings({
+ no_password: true,
+ username: "default",
+ password: "default",
+ })
+
+ alias Num = Int32 | Float64
+ alias Ids = String | Array(String)
+
+ def on_load
+    # Tesira requires some breathing room
+ queue.wait = false
+ queue.delay = 30.milliseconds
+ end
+
+ def connected
+ @telnet = telnet = Telnet.new do |telnet_response|
+ transport.send telnet_response
+ end
+ transport.pre_processor { |bytes| telnet.buffer(bytes) }
+
+ if setting(Bool, :no_password)
+ do_send setting(String, :username) || "admin", wait: false, delay: 200.milliseconds, priority: 98
+ do_send setting(String, :password), wait: false, delay: 200.milliseconds, priority: 97
+ end
+ do_send "SESSION set verbose false", priority: 96
+
+ schedule.every(60.seconds) do
+ do_send "DEVICE get serialNumber", priority: 95
+ end
+ end
+
+ def disconnected
+ transport.tokenizer = nil
+ schedule.clear
+ end
+
+ def preset(number_or_name : String | Int32)
+ if number_or_name.is_a? Int32
+ do_send "DEVICE recallPreset #{number_or_name}", priority: 30, name: "preset_#{number_or_name}"
+ else
+ do_send build(:DEVICE, :recallPresetByName, number_or_name), priority: 30, name: "preset_#{number_or_name}"
+ end
+ end
+
+ def start_audio
+ do_send "DEVICE startAudio"
+ end
+
+ def reboot
+ do_send "DEVICE reboot"
+ end
+
+ def get_aliases
+ do_send "SESSION get aliases"
+ end
+
+ MIXERS = {
+ "matrix" => "crosspointLevelState",
+ "mixer" => "crosspoint",
+ }
+
+ def mixer(id : String, inouts : Hash(Int32, Int32 | Array(Int32)) | Array(Int32), mute : Bool = false, type : String = "matrix")
+    mixer_type = MIXERS[type]? || type
+
+ if inouts.is_a? Hash
+ inouts.each do |input, outs|
+ outputs = ensure_array(outs)
+ outputs.each do |output|
+ do_send build(id, :set, mixer_type, input, output, mute), priority: 30, name: "mixmute_#{input}_#{output}"
+ end
+ end
+ else # assume array (auto-mixer)
+ inouts.each do |input|
+ do_send build(id, :set, mixer_type, input, mute), priority: 30, name: "mixmute_#{input}"
+ end
+ end
+ end
+
+ FADERS = {
+ "fader" => "level",
+ "matrix_in" => "inputLevel",
+ "matrix_out" => "outputLevel",
+ "matrix_crosspoint" => "crosspointLevel",
+ "level" => "fader",
+ "inputLevel" => "matrix_in",
+ "outputLevel" => "matrix_out",
+ "crosspointLevel" => "matrix_crosspoint",
+ }
+
+ def fader(fader_id : Ids, level : Num | Bool, index : Int32 | Array(Int32) = 1, type : String = "fader")
+ # value range: -100 ~ 12
+    fader_type = FADERS[type]? || type
+
+ fader_ids = ensure_array(fader_id)
+ indicies = ensure_array(index)
+ fader_ids.each do |fad|
+ indicies.each do |i|
+ do_send build(fad, :set, fader_type, i, level), priority: 30, name: "fade_#{fad}_#{i}"
+ self["#{fader_type}_#{fad}_#{i}"] = level
+ end
+ end
+ end
+
+ # Named params version
+ def faders(ids : Ids, level : Num | Bool, index : Int32 | Array(Int32) = 1, type : String = "fader")
+ fader(ids, level, index, type)
+ end
+
+ MUTES = {
+ "fader" => "mute",
+ "matrix_in" => "inputMute",
+ "matrix_out" => "outputMute",
+ "mute" => "fader",
+ "inputMute" => "matrix_in",
+ "outputMute" => "matrix_out",
+ }
+
+ def mute(fader_id : Ids, value : Bool = true, index : Int32 | Array(Int32) = 1, type : String = "fader")
+    mute_type = MUTES[type]? || type
+
+ fader_ids = ensure_array(fader_id)
+ indicies = ensure_array(index)
+ fader_ids.each do |fad|
+ indicies.each do |i|
+ do_send build(fad, :set, mute_type, i, value), priority: 30, name: "mute_#{fad}_#{i}"
+ self["#{mute_type}_#{fad}_#{i}_mute"] = value
+ end
+ end
+ end
+
+ # Named params version
+ def mutes(ids : Ids, muted : Bool, index : Int32 | Array(Int32) = 1, type : String = "fader")
+ mute(ids, muted, index, type)
+ end
+
+ def unmute(fader_id : Ids, index : Int32 | Array(Int32) = 1, type : String = "fader")
+ mute(fader_id, false, index, type)
+ end
+
+ def query_fader(fader_id : Ids, index : Int32 | Array(Int32) = 1, type : String = "fader")
+    fad_type = FADERS[type]? || type
+ fader_id = ensure_array(fader_id)[0]
+ index = ensure_array(index)[0]
+
+ do_send build(fader_id, :get, fad_type, index)
+ end
+
+ # Named params version
+ def query_faders(ids : Ids, index : Int32 | Array(Int32) = 1, type : String = "fader")
+ query_fader(ids, index, type)
+ end
+
+ def query_mute(fader_id : Ids, index : Int32 | Array(Int32) = 1, type : String = "fader")
+    mute_type = MUTES[type]? || type
+ fader_id = ensure_array(fader_id)[0]
+ index = ensure_array(index)[0]
+
+ do_send build(fader_id, :get, mute_type, index)
+ end
+
+ # Named params version
+ def query_mutes(ids : Ids, index : Int32 | Array(Int32) = 1, type : String = "fader")
+ query_mute(ids, index, type)
+ end
+
+ def received(data, task)
+ data = String.new(data).strip
+
+ logger.debug { "Tesira responded -> data: #{data}" }
+ result = data.split(" ")
+
+ if result[0] == "-"
+ task.try(&.abort)
+ end
+
+ if data =~ /login:|server/i
+ transport.tokenizer = Tokenizer.new "\r\n"
+ end
+
+ task.try(&.success)
+ end
+
+ private def build(*args)
+ cmd = ""
+ args.each do |arg|
+ data = arg.to_s
+ next if data.blank?
+ cmd = cmd + " " if cmd.size > 0
+
+ if data.includes? " "
+ cmd = cmd + "\""
+ cmd = cmd + data
+ cmd = cmd + "\""
+ else
+ cmd = cmd + data
+ end
+ end
+ cmd
+ end
+
+ private def do_send(command, **options)
+ logger.debug { "requesting #{command}" }
+ send @telnet.not_nil!.prepare(command), **options
+ end
+
+ private def ensure_array(object)
+ object.is_a?(Array) ? object : [object]
+ end
+end
diff --git a/drivers/biamp/tesira_spec.cr b/drivers/biamp/tesira_spec.cr
new file mode 100644
index 00000000000..c12c7362d02
--- /dev/null
+++ b/drivers/biamp/tesira_spec.cr
@@ -0,0 +1,37 @@
+DriverSpecs.mock_driver "Biamp::Tesira" do
+ transmit "login: "
+ should_send "default\r\n"
+ should_send "default\r\n"
+ should_send "SESSION set verbose false\r\n"
+
+ exec(:preset, 1001)
+ should_send "DEVICE recallPreset 1001"
+
+ exec(:preset, "1001-test")
+ should_send "DEVICE recallPresetByName 1001-test"
+
+ exec(:start_audio)
+ should_send "DEVICE startAudio"
+
+ exec(:reboot)
+ should_send "DEVICE reboot"
+
+ exec(:get_aliases)
+ should_send "SESSION get aliases"
+
+ exec(:mixer, "123", [1])
+ should_send "123 set crosspointLevelState 1 false"
+
+ exec(:fader, "Fader123", 11)
+ should_send "Fader123 set level 1 11"
+ responds("+OK\r\n")
+  status["level_Fader123_1"].should eq(11)
+
+ exec(:mute, "Fader123")
+ should_send "Fader123 set mute 1 true"
+ responds("+OK\r\n")
+  status["mute_Fader123_1_mute"].should eq(true)
+
+ exec(:query_fader, "Fader123")
+ should_send "Fader123 get level 1"
+end
diff --git a/drivers/bose/control_space_serial.cr b/drivers/bose/control_space_serial.cr
new file mode 100644
index 00000000000..a2721f7b0b4
--- /dev/null
+++ b/drivers/bose/control_space_serial.cr
@@ -0,0 +1,58 @@
+module Bose; end
+
+# Documentation: https://aca.im/driver_docs/Bose/Bose-ControlSpace-SerialProtocol-v5.pdf
+
+class Bose::ControlSpaceSerial < PlaceOS::Driver
+ # Discovery Information
+ tcp_port 10055
+ descriptive_name "Bose ControlSpace Serial Protocol"
+ generic_name :Mixer
+
+ def on_load
+    # 0x0D (carriage return, \r)
+ transport.tokenizer = Tokenizer.new(Bytes[0x0D])
+ on_update
+ end
+
+ def on_update
+ end
+
+ def connected
+ schedule.every(60.seconds) do
+ logger.debug { "-- maintaining connection" }
+ do_send "GS", priority: 99
+ end
+ end
+
+ def disconnected
+ schedule.clear
+ end
+
+ private def do_send(data, **options)
+ logger.debug { "requesting: #{data}" }
+ send "#{data}\x0D", **options
+ end
+
+ def set_parameter_group(id : UInt8)
+ do_send("SS #{id.to_s(16).upcase}", wait: false, name: "set_pgroup").get
+ self[:parameter_group] = id
+ end
+
+ def get_parameter_group
+ do_send "GS"
+ end
+
+ def received(data, task)
+ # Ignore the framing bytes
+ data = String.new(data).rchop
+ logger.debug { "ControlSpace sent: #{data}" }
+
+ parts = data.split(" ")
+ case parts[0]
+ when "S"
+ self[:parameter_group] = parts[1].to_i(16)
+ end
+
+ task.try &.success
+ end
+end
diff --git a/drivers/bose/control_space_serial_spec.cr b/drivers/bose/control_space_serial_spec.cr
new file mode 100644
index 00000000000..f5685cafbd0
--- /dev/null
+++ b/drivers/bose/control_space_serial_spec.cr
@@ -0,0 +1,10 @@
+DriverSpecs.mock_driver "Bose::ControlSpaceSerial" do
+ exec(:set_parameter_group, 12)
+ should_send("SS C\r")
+ status[:parameter_group].should eq(12)
+
+ exec(:get_parameter_group)
+ should_send("GS\r")
+ responds("S FF\r")
+ status[:parameter_group].should eq(255)
+end
diff --git a/drivers/cisco/dna_spaces.cr b/drivers/cisco/dna_spaces.cr
new file mode 100644
index 00000000000..c5d221038d1
--- /dev/null
+++ b/drivers/cisco/dna_spaces.cr
@@ -0,0 +1,631 @@
+module Cisco; end
+
+require "set"
+require "jwt"
+require "s2_cells"
+require "simple_retry"
+require "placeos-driver/interface/locatable"
+
+class Cisco::DNASpaces < PlaceOS::Driver
+ include Interface::Locatable
+
+ # Discovery Information
+ descriptive_name "Cisco DNA Spaces"
+ generic_name :DNA_Spaces
+ uri_base "https://partners.dnaspaces.io"
+
+ default_settings({
+ dna_spaces_activation_key: "provide this and the API / tenant ids will be generated automatically",
+ dna_spaces_api_key: "X-API-KEY",
+ tenant_id: "sfdsfsdgg",
+
+ # Time before a user location is considered probably too old (in minutes)
+ max_location_age: 10,
+
+ floorplan_mappings: {
+ location_a4cb0: {
+ "level_name" => "optional name",
+ "building" => "zone-GAsXV0nc",
+ "level" => "zone-GAsmleH",
+ "offset_x" => 12.4,
+ "offset_y" => 5.2,
+ "map_width" => 50.3,
+ "map_height" => 100.9,
+ },
+ },
+
+ debug_stream: false,
+ })
+
+ @streaming = false
+ @last_received = 0_i64
+ @stream_active = false
+
+ def on_load
+ on_update
+ if !@api_key.empty?
+ @streaming = true
+ spawn(same_thread: true) { start_streaming_events }
+ end
+ end
+
+ def on_unload
+ @terminated = true
+ @channel.close
+ @stream_active = false
+ update_monitoring_status(running: false)
+ end
+
+ @activation_token : String = ""
+ @api_key : String = ""
+ @tenant_id : String = ""
+ @terminated : Bool = false
+ @channel : Channel(String) = Channel(String).new
+ @max_location_age : Time::Span = 10.minutes
+ @s2_level : Int32 = 21
+ @floorplan_mappings : Hash(String, Hash(String, String | Float64)) = Hash(String, Hash(String, String | Float64)).new
+ @debug_stream : Bool = false
+ @events_received : UInt64 = 0_u64
+
+ def on_update
+ @max_location_age = (setting?(UInt32, :max_location_age) || 10).minutes
+ @s2_level = setting?(Int32, :s2_level) || 21
+ @floorplan_mappings = setting?(Hash(String, Hash(String, String | Float64)), :floorplan_mappings) || @floorplan_mappings
+ @debug_stream = setting?(Bool, :debug_stream) || false
+
+ schedule.clear
+ schedule.every(30.minutes) { cleanup_caches }
+ schedule.every(5.minutes) { update_monitoring_status }
+ schedule.in(5.seconds) { update_monitoring_status }
+
+ @activation_token = setting?(String, :dna_spaces_activation_key) || ""
+ if @activation_token.empty?
+ @api_key = setting(String, :dna_spaces_api_key)
+ @tenant_id = setting(String, :tenant_id)
+ else
+ @api_key = setting?(String, :dna_spaces_api_key) || ""
+ @tenant_id = setting?(String, :tenant_id) || ""
+
+ # Activate the API key using the activation_token
+ schedule.in(5.seconds) { activate } if @api_key.empty?
+ end
+
+ if !@streaming && !@api_key.empty?
+ @streaming = true
+ spawn(same_thread: true) { start_streaming_events }
+ end
+ end
+
+ @[Security(Level::Support)]
+ def activate
+ return if @activation_token.empty?
+
+ response = get("/client/v1/partner/partnerPublicKey/")
+ raise "failed to obtain partner public key, code #{response.status_code}" unless response.success?
+
+ logger.debug { "public key requested: #{response.body}" }
+
+ payload = NamedTuple(
+ status: Bool,
+ message: String,
+ data: Array(ActivactionPublicKey)).from_json(response.body.not_nil!)
+
+ raise "unexpected failure obtaining partner public key: #{payload[:message]}" unless payload[:status]
+
+ public_key = payload[:data][0].public_key
+    payload, _header = JWT.decode(@activation_token, public_key, JWT::Algorithm::RS256)
+ app_id = payload["appId"].as_s
+ ref_id = payload["activationRefId"].as_s
+ tenant_id = payload["tenantId"].as_i64.to_s
+
+ response = post("/client/v1/partner/activateOnPremiseApp", headers: {
+ "Content-Type" => "application/json",
+ "Authorization" => "Bearer #{@activation_token}",
+ }, body: {
+ appId: app_id,
+ activationRefId: ref_id,
+ }.to_json)
+ raise "failed to obtain API key, code #{response.status_code}\n#{response.body}" unless response.success?
+
+ logger.debug { "application activated: #{response.body}" }
+
+ payload = NamedTuple(
+ status: Bool,
+ message: String,
+ data: NamedTuple(apiKey: String)).from_json(response.body.not_nil!)
+
+ raise "unexpected failure obtaining API key: #{payload[:message]}" unless payload[:status]
+
+ api_key = payload[:data][:apiKey]
+ logger.debug { "saving API key: #{tenant_id}, #{api_key}" }
+
+ define_setting(:tenant_id, tenant_id)
+ define_setting(:dna_spaces_api_key, api_key)
+ define_setting(:dna_spaces_activation_key, "")
+
+ logger.debug { "settings saved! Starting stream" }
+ @api_key = api_key
+ @tenant_id = tenant_id
+ if !@streaming
+ @streaming = true
+ spawn(same_thread: true) { start_streaming_events }
+ end
+ end
+
+ class LocationInfo
+ include JSON::Serializable
+
+ getter location : Location
+
+ @[JSON::Field(key: "locationDetails")]
+ getter details : LocationDetails
+ end
+
+ def get_location_info(location_id : String)
+ response = get("/api/partners/v1/locations/#{location_id}?partnerTenantId=#{@tenant_id}", headers: {
+ "X-API-KEY" => @api_key,
+ })
+
+ raise "failed to obtain location id #{location_id}, code #{response.status_code}" unless response.success?
+ LocationInfo.from_json(response.body.not_nil!)
+ end
+
+ @description_lock : Mutex = Mutex.new
+ @location_descriptions : Hash(String, String) = {} of String => String
+
+ def seen_locations
+ @description_lock.synchronize { @location_descriptions.dup }
+ end
+
+ # MAC Address => Location (including user)
+ @locations : Hash(String, DeviceLocationUpdate) = {} of String => DeviceLocationUpdate
+ @loc_lock : Mutex = Mutex.new
+
+ def locations
+ @loc_lock.synchronize { yield @locations }
+ end
+
+ @user_lookup : Hash(String, Set(String)) = {} of String => Set(String)
+ @user_loc : Mutex = Mutex.new
+
+ def user_lookup
+ @user_loc.synchronize { yield @user_lookup }
+ end
+
+ def user_lookup(user_id : String)
+ formatted_user = format_username(user_id)
+ user_lookup { |lookup| lookup[formatted_user]? }
+ end
+
+ def locate_mac(address : String)
+ formatted_address = format_mac(address)
+ locations { |locs| locs[formatted_address]? }
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def inspect_state
+ logger.debug {
+ "MAC Locations: #{locations &.keys}"
+ }
+ {tracking: locations &.size, events_received: @events_received}
+ end
+
+ @map_details : Hash(String, Dimension) = {} of String => Dimension
+ @map_lock : Mutex = Mutex.new
+
+ def get_map_details(map_id : String)
+ map = @map_lock.synchronize { @map_details[map_id]? }
+ if !map
+ response = get("/api/partners/v1/maps/#{map_id}?partnerTenantId=#{@tenant_id}", headers: {
+ "X-API-KEY" => @api_key,
+ })
+ if !response.success?
+ message = "failed to obtain map id #{map_id}, code #{response.status_code}"
+ logger.warn { message }
+ return nil
+ end
+ map = MapInfo.from_json(response.body.not_nil!).dimension
+ @map_lock.synchronize { @map_details[map_id] = map }
+ end
+ map
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def cleanup_caches : Nil
+ logger.debug { "removing location data that is over 30 minutes old" }
+
+ old = 30.minutes.ago.to_unix
+ remove_keys = [] of String
+ locations do |locs|
+ locs.each { |mac, location| remove_keys << mac if location.last_seen < old }
+ remove_keys.each { |mac| locs.delete(mac) }
+ end
+
+ logger.debug { "removed #{remove_keys.size} MACs" }
+ nil
+ end
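The pruning above drops any cached location whose `last_seen` is older than 30 minutes. A minimal Ruby sketch of the same age-based eviction; the `Entry` struct and timestamps are illustrative, not part of the driver:

```ruby
Entry = Struct.new(:last_seen) # unix seconds, mirrors DeviceLocationUpdate#last_seen

# Remove cache entries not seen within max_age seconds, return how many were dropped.
def prune_stale(cache, now, max_age: 30 * 60)
  cutoff = now - max_age
  stale = cache.select { |_mac, entry| entry.last_seen < cutoff }.keys
  stale.each { |mac| cache.delete(mac) }
  stale.size
end

now = Time.now.to_i
cache = {
  "aabbccddeeff" => Entry.new(now - 10),      # fresh
  "112233445566" => Entry.new(now - 45 * 60), # stale
}
removed = prune_stale(cache, now)
# removed == 1; only the fresh MAC remains
```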
+
+ # we want to stream events until the driver is terminated
+ protected def start_streaming_events
+ @streaming = true
+ SimpleRetry.try_to(
+ base_interval: 10.milliseconds,
+ max_interval: 5.seconds
+ ) { stream_events unless @terminated }
+ ensure
+ @streaming = false
+ end
+
+ # sometimes the map id is missing from an update even though the device is
+ # in a known location, so we cache location id => map id
+ @location_id_maps = {} of String => String
+
+ # Processes events as they arrive and forces a disconnect if no events are
+ # received for a period of time, as the remote sends heartbeats periodically
+ protected def process_events(client)
+ loop do
+ select
+ when data = @channel.receive
+ logger.debug { "received push #{data}" } if @debug_stream
+ @events_received = @events_received &+ 1_u64
+ begin
+ event = Cisco::DNASpaces::Events.from_json(data)
+ payload = event.payload
+ case payload
+ when DeviceExit
+ device_mac = format_mac(payload.device.mac_address)
+ locations &.delete(device_mac)
+ when DeviceEntry
+ # This is used entirely for maintaining the location description cache
+ @description_lock.synchronize { payload.location.descriptions(@location_descriptions) }
+ when DeviceLocationUpdate
+ # Keep track of device location
+ device_mac = format_mac(payload.device.mac_address)
+ existing = nil
+
+ # ignore locations where we don't have enough details to put the device on a map
+ if payload.map_id.presence
+ @location_id_maps[payload.location.location_id] = payload.map_id
+ else
+ found = false
+ payload.location_mappings.values.each do |loc_id|
+ if map_id = @location_id_maps[loc_id]?
+ payload.map_id = map_id
+ found = true
+ break
+ end
+ end
+
+ if !found
+ logger.debug { "ignoring device #{device_mac} location as map_id is empty, location id #{payload.location.location_id}, visit #{payload.visit_id}" }
+ next
+ end
+ end
+
+ # lastSeen is reported in milliseconds, convert to unix seconds
+ payload.last_seen = payload.last_seen // 1000
+
+ locations do |loc|
+ existing = loc[device_mac]?
+ loc[device_mac] = payload
+ end
+
+ # Maintain user lookup
+ if payload.raw_user_id.presence
+ user_id = format_username(payload.raw_user_id)
+
+ if existing && payload.raw_user_id != existing.raw_user_id
+ old_user_id = format_username(existing.raw_user_id)
+
+ user_lookup do |lookup|
+ if devices = lookup[old_user_id]?
+ devices.delete(device_mac)
+ lookup.delete(old_user_id) if devices.empty?
+ end
+
+ devices = lookup[user_id]? || Set(String).new
+ devices << device_mac
+ lookup[user_id] = devices
+ end
+ else
+ user_lookup do |lookup|
+ devices = lookup[user_id]? || Set(String).new
+ devices << device_mac
+ lookup[user_id] = devices
+ end
+ end
+ end
+
+ # payload.location_mappings => { "ZONE" => loc_id, "FLOOR" => loc_id, "BUILDING" => loc_id, "CAMPUS" => loc_id }
+ else
+ logger.debug { "ignoring event: #{payload ? payload.class : event.class}" }
+ end
+ rescue error
+ logger.error(exception: error) { "parsing DNA Spaces event: #{data}" }
+ end
+ when timeout(20.seconds)
+ logger.debug { "no events received for 20 seconds, expected heartbeat at 15 seconds" }
+ @channel.close
+ break
+ end
+ end
+ ensure
+ client.close
+ end
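The user-index bookkeeping in `process_events` (moving a MAC between users and dropping a user entry once its device set is empty) is a standard reverse-index pattern. A Ruby sketch under that reading, with a hypothetical `reassign_device` helper:

```ruby
require "set"

# Reassign a device MAC to new_user in a user => Set(mac) index,
# removing it from old_user first and dropping old_user if now empty.
def reassign_device(lookup, mac, new_user, old_user = nil)
  if old_user && old_user != new_user
    if (devices = lookup[old_user])
      devices.delete(mac)
      lookup.delete(old_user) if devices.empty?
    end
  end
  (lookup[new_user] ||= Set.new) << mac
  lookup
end

lookup = {}
reassign_device(lookup, "aabbcc", "alice")
reassign_device(lookup, "aabbcc", "bob", "alice")
# lookup now maps only "bob" to the device; alice's empty set was removed
```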
+
+ protected def stream_events
+ client = HTTP::Client.new URI.parse(config.uri.not_nil!)
+ client.get("/api/partners/v1/firehose/events", HTTP::Headers{
+ "X-API-KEY" => @api_key,
+ }) do |response|
+ if !response.success?
+ @stream_active = false
+ logger.warn { "failed to connect to firehose api #{response.status_code}" }
+ raise "failed to connect to firehose api #{response.status_code}"
+ end
+
+ @stream_active = true
+
+ # We use a channel for event processing so we can make use of timeouts
+ @channel = Channel(String).new
+ spawn(same_thread: true) { process_events(client) }
+
+ begin
+ loop do
+ if response.body_io.closed?
+ @channel.close
+ break
+ end
+
+ if data = response.body_io.gets
+ @last_received = Time.utc.to_unix_ms
+ @channel.send data
+ else
+ @channel.close
+ break
+ end
+ end
+ rescue IO::Error
+ @channel.close
+ end
+ end
+
+ # Trigger the retry behaviour
+ @stream_active = false
+ raise "stream closed"
+ end
+
+ # =============================
+ # Locatable interface
+ # =============================
+ def locate_user(email : String? = nil, username : String? = nil)
+ if macs = user_lookup(username.presence || email.presence.not_nil!)
+ location_max_age = @max_location_age.ago.to_unix
+
+ macs.compact_map { |mac|
+ if location = locate_mac(mac)
+ if location.last_seen > location_max_age
+ # we update the mac_address to a formatted version
+ location.device.mac_address = mac
+ location
+ end
+ end
+ }.sort { |a, b|
+ b.last_seen <=> a.last_seen
+ }.map { |location|
+ lat = location.latitude
+ lon = location.longitude
+
+ loc = {
+ "location" => "wireless",
+ "coordinates_from" => "top-left",
+ "x" => location.x_pos,
+ "y" => location.y_pos,
+ "lon" => lon,
+ "lat" => lat,
+ "s2_cell_id" => S2Cells::LatLon.new(lat, lon).to_token(@s2_level),
+ "mac" => location.device.mac_address,
+ "variance" => location.unc,
+ "last_seen" => location.last_seen,
+ "dna_floor_id" => location.map_id,
+ "ssid" => location.ssid,
+ "manufacturer" => location.device.manufacturer,
+ "os" => location.device.os,
+ }
+
+ map_width = 0.0
+ map_height = 0.0
+ offset_x = 0.0
+ offset_y = 0.0
+
+ # Add our zone IDs to the response
+ location.location_mappings.each do |tag, location_id|
+ if level_data = @floorplan_mappings[location_id]?
+ level_data.each do |key, value|
+ case key
+ when "offset_x"
+ offset_x = value.as(Float64)
+ loc["x"] = location.x_pos - offset_x
+ when "offset_y"
+ offset_y = value.as(Float64)
+ loc["y"] = location.y_pos - offset_y
+ when "map_width"
+ map_width = value.as(Float64)
+ when "map_height"
+ map_height = value.as(Float64)
+ else
+ loc[key] = value
+ end
+ end
+ break
+ end
+ end
+
+ # Add map information to the response
+ if map_width > 0.0 && map_height > 0.0
+ loc["map_width"] = map_width
+ loc["map_height"] = map_height
+ elsif map_size = get_map_details(location.map_id)
+ loc["map_width"] = map_width > 0.0 ? map_width : (map_size.length - offset_x)
+ loc["map_height"] = map_height > 0.0 ? map_height : (map_size.width - offset_y)
+ end
+
+ loc
+ }
+ else
+ [] of Nil
+ end
+ end
+
+ # Will return an array of MAC address strings
+ # lowercase with no separation characters, e.g. abcdeffd1234
+ def macs_assigned_to(email : String? = nil, username : String? = nil) : Array(String)
+ user_lookup(username.presence || email.presence.not_nil!).try(&.to_a) || [] of String
+ end
+
+ # Will return `nil` or `{"location": "wireless", "assigned_to": "bob123", "mac_address": "abcd"}`
+ def check_ownership_of(mac_address : String) : OwnershipMAC?
+ if location = locate_mac(mac_address)
+ {
+ location: "wireless",
+ assigned_to: format_username(location.raw_user_id),
+ mac_address: format_mac(mac_address),
+ }
+ end
+ end
+
+ # Will return an array of devices and their x, y coordinates
+ def device_locations(zone_id : String, location : String? = nil)
+ logger.debug { "looking up device locations in #{zone_id}" }
+ return [] of Nil if location.presence && location != "wireless"
+
+ # Find the floors associated with the provided zone id
+ floors = [] of String
+ adjustments = {} of String => Tuple(Float64, Float64, Float64, Float64)
+ @floorplan_mappings.each do |floor_id, data|
+ if data.values.includes?(zone_id)
+ floors << floor_id
+ offset_x = (data["offset_x"]? || 0.0).as(Float64)
+ offset_y = (data["offset_y"]? || 0.0).as(Float64)
+ map_width = (data["map_width"]? || -1.0).as(Float64)
+ map_height = (data["map_height"]? || -1.0).as(Float64)
+ adjustments[floor_id] = {offset_x, offset_y, map_width, map_height}
+ end
+ end
+ logger.debug { "found matching floors: #{floors}" }
+ return [] of Nil if floors.empty?
+
+ checking_count = locations &.size
+ wrong_floor = 0
+ too_old = 0
+
+ # Find the devices that are on the matching floors
+ oldest_location = @max_location_age.ago.to_unix
+
+ matching = locations(&.compact_map { |mac, loc|
+ if loc.last_seen < oldest_location
+ too_old += 1
+ next
+ end
+ if (floors & loc.location_mappings.values).empty?
+ wrong_floor += 1
+ next
+ end
+
+ # ensure the formatted mac is being used
+ loc.device.mac_address = mac
+ loc
+ })
+
+ logger.debug { "found #{matching.size} matching devices\nchecked #{checking_count} locations, #{wrong_floor} were on the wrong floor, #{too_old} were too old" }
+
+ matching.group_by(&.map_id).flat_map { |map_id, locations|
+ map_width = -1.0
+ map_height = -1.0
+ offset_x = 0.0
+ offset_y = 0.0
+
+ # any adjustments required for these locations?
+ locations.first.location_mappings.each do |tag, location_id|
+ if level_data = adjustments[location_id]?
+ offset_x, offset_y, map_width, map_height = level_data
+ break
+ end
+ end
+
+ if map_width == -1.0 || map_height == -1.0
+ if map_size = get_map_details(map_id)
+ map_width = map_width > -1.0 ? map_width : (map_size.length - offset_x)
+ map_height = map_height > -1.0 ? map_height : (map_size.width - offset_y)
+ end
+ end
+
+ locations.map do |loc|
+ lat = loc.latitude
+ lon = loc.longitude
+
+ {
+ location: :wireless,
+ coordinates_from: "top-left",
+ x: loc.x_pos - offset_x,
+ y: loc.y_pos - offset_y,
+ lon: lon,
+ lat: lat,
+ s2_cell_id: S2Cells::LatLon.new(lat, lon).to_token(@s2_level),
+ mac: loc.device.mac_address,
+ variance: loc.unc,
+ last_seen: loc.last_seen,
+ map_width: map_width,
+ map_height: map_height,
+ ssid: loc.ssid,
+ manufacturer: loc.device.manufacturer,
+ os: loc.device.os,
+ }
+ end
+ }
+ end
+
+ def format_mac(address : String)
+ address.gsub(/(0x|[^0-9A-Fa-f])*/, "").downcase
+ end
+
+ def format_username(user : String)
+ if user.includes? "@"
+ user = user.split("@")[0]
+ elsif user.includes? "\\"
+ user = user.split("\\")[1]
+ end
+ user.downcase
+ end
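The two normalisers above are pure string transforms, so they translate directly. A Ruby sketch of the same logic for illustration (the regex is written with a plain alternation, which behaves the same as the original grouped form):

```ruby
# Strip 0x prefixes and any non-hex characters, lowercase the result.
def format_mac(address)
  address.gsub(/0x|[^0-9A-Fa-f]/, "").downcase
end

# Reduce "user@domain" and "DOMAIN\\user" forms to a bare lowercase username.
def format_username(user)
  if user.include?("@")
    user = user.split("@")[0]
  elsif user.include?("\\")
    user = user.split("\\")[1]
  end
  user.downcase
end

format_mac("0x12:34:A6-789B") # => "1234a6789b"
format_username("CORP\\Bob")  # => "bob"
```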
+
+ # This provides the DNA Spaces dashboard with stream consumption status
+ @[Security(PlaceOS::Driver::Level::Administrator)]
+ def update_monitoring_status(running : Bool = true) : Nil
+ response = put("/api/partners/v1/monitoring/status", headers: {
+ "Content-Type" => "application/json",
+ "X-API-KEY" => @api_key,
+ }, body: {
+ data: {
+ overallStatus: {
+ status: running ? "up" : "down",
+ notices: [] of Nil,
+ },
+ instanceDetails: {
+ ipAddress: "",
+ instanceId: module_id,
+ },
+ cloudFirehose: {
+ status: @stream_active ? "connected" : "disconnected",
+ lastReceived: @last_received,
+ },
+ localFirehose: {
+ status: "disconnected",
+ lastReceived: 0,
+ },
+ subsystems: [] of Nil,
+ },
+ }.to_json)
+ raise "failed to update status, code #{response.status_code}\n#{response.body}" unless response.success?
+ end
+end
+
+require "./dna_spaces/events"
diff --git a/drivers/cisco/dna_spaces/activation_publickey.cr b/drivers/cisco/dna_spaces/activation_publickey.cr
new file mode 100644
index 00000000000..eb30ec06dd3
--- /dev/null
+++ b/drivers/cisco/dna_spaces/activation_publickey.cr
@@ -0,0 +1,14 @@
+require "./events"
+
+class Cisco::DNASpaces::ActivactionPublicKey
+ include JSON::Serializable
+
+ getter version : String
+
+ @[JSON::Field(key: "publicKey")]
+ getter public_key : String
+
+ def public_key
+ "-----BEGIN PUBLIC KEY-----\n#{@public_key}\n-----END PUBLIC KEY-----\n"
+ end
+end
diff --git a/drivers/cisco/dna_spaces/app_activaction.cr b/drivers/cisco/dna_spaces/app_activaction.cr
new file mode 100644
index 00000000000..36202b21424
--- /dev/null
+++ b/drivers/cisco/dna_spaces/app_activaction.cr
@@ -0,0 +1,21 @@
+require "./events"
+
+class Cisco::DNASpaces::AppActivaction
+ include JSON::Serializable
+
+ @[JSON::Field(key: "spacesTenantName")]
+ getter spaces_tenant_name : String
+
+ @[JSON::Field(key: "spacesTenantId")]
+ getter spaces_tenant_id : String
+
+ @[JSON::Field(key: "partnerTenantId")]
+ getter partner_tenant_id : String
+ getter name : String
+
+ @[JSON::Field(key: "referenceId")]
+ getter reference_id : String
+
+ @[JSON::Field(key: "instanceName")]
+ getter instance_name : String
+end
diff --git a/drivers/cisco/dna_spaces/device.cr b/drivers/cisco/dna_spaces/device.cr
new file mode 100644
index 00000000000..7f6c09daaed
--- /dev/null
+++ b/drivers/cisco/dna_spaces/device.cr
@@ -0,0 +1,39 @@
+require "./events"
+
+class Cisco::DNASpaces::Device
+ include JSON::Serializable
+
+ @[JSON::Field(key: "deviceId")]
+ getter device_id : String
+
+ @[JSON::Field(key: "userId")]
+ getter user_id : String
+
+ getter tags : Array(String)
+ getter mobile : String
+ getter email : String
+ getter gender : String
+
+ @[JSON::Field(key: "firstName")]
+ getter first_name : String
+
+ @[JSON::Field(key: "lastName")]
+ getter last_name : String
+
+ @[JSON::Field(key: "postalCode")]
+ getter postal_code : String
+
+ # optIns
+ # otherFields
+ # socialNetworkInfo
+
+ # We make this editable so we can store the formatted version here
+ @[JSON::Field(key: "macAddress")]
+ property mac_address : String
+ getter manufacturer : String
+ getter os : String
+
+ @[JSON::Field(key: "osVersion")]
+ getter os_version : String
+ getter type : String
+end
diff --git a/drivers/cisco/dna_spaces/device_count.cr b/drivers/cisco/dna_spaces/device_count.cr
new file mode 100644
index 00000000000..c799d9b8286
--- /dev/null
+++ b/drivers/cisco/dna_spaces/device_count.cr
@@ -0,0 +1,22 @@
+require "./events"
+
+class Cisco::DNASpaces::DeviceCount
+ include JSON::Serializable
+
+ getter location : Location
+
+ @[JSON::Field(key: "associatedCount")]
+ getter associated_count : Int32
+
+ @[JSON::Field(key: "estimatedProbingCount")]
+ getter estimated_probing_count : Int32
+
+ @[JSON::Field(key: "probingRandomizedPercentage")]
+ getter probing_randomized_percentage : Float64
+
+ @[JSON::Field(key: "estimatedDensity")]
+ getter estimated_density : Float64
+
+ @[JSON::Field(key: "estimatedCapacityPercentage")]
+ getter estimated_capacity_percentage : Float64
+end
diff --git a/drivers/cisco/dna_spaces/device_entry.cr b/drivers/cisco/dna_spaces/device_entry.cr
new file mode 100644
index 00000000000..befd6dd137e
--- /dev/null
+++ b/drivers/cisco/dna_spaces/device_entry.cr
@@ -0,0 +1,26 @@
+require "./events"
+
+class Cisco::DNASpaces::DeviceEntry
+ include JSON::Serializable
+
+ getter device : Device
+ getter location : Location
+
+ @[JSON::Field(key: "visitId")]
+ getter visit_id : String
+
+ @[JSON::Field(key: "entryTimestamp")]
+ getter entry_timestamp : Int64
+
+ @[JSON::Field(key: "entryDateTime")]
+ getter entry_datetime : String
+
+ @[JSON::Field(key: "timeZone")]
+ getter time_zone : String
+
+ @[JSON::Field(key: "deviceClassification")]
+ getter device_classification : String
+
+ @[JSON::Field(key: "daysSinceLastVisit")]
+ getter days_since_last_visit : Int32
+end
diff --git a/drivers/cisco/dna_spaces/device_exit.cr b/drivers/cisco/dna_spaces/device_exit.cr
new file mode 100644
index 00000000000..151ef54cd5f
--- /dev/null
+++ b/drivers/cisco/dna_spaces/device_exit.cr
@@ -0,0 +1,38 @@
+require "./events"
+
+class Cisco::DNASpaces::DeviceExit
+ include JSON::Serializable
+
+ getter device : Device
+ getter location : Location
+
+ @[JSON::Field(key: "visitId")]
+ getter visit_id : String
+
+ @[JSON::Field(key: "visitDurationMinutes")]
+ getter visit_duration_minutes : Int32
+
+ @[JSON::Field(key: "entryTimestamp")]
+ getter entry_timestamp : Int64
+
+ @[JSON::Field(key: "entryDateTime")]
+ getter entry_datetime : String
+
+ @[JSON::Field(key: "exitTimestamp")]
+ getter exit_timestamp : Int64
+
+ @[JSON::Field(key: "exitDateTime")]
+ getter exit_datetime : String
+
+ @[JSON::Field(key: "timeZone")]
+ getter time_zone : String
+
+ @[JSON::Field(key: "deviceClassification")]
+ getter device_classification : String
+
+ @[JSON::Field(key: "visitClassification")]
+ getter visit_classification : String
+end
diff --git a/drivers/cisco/dna_spaces/device_location_update.cr b/drivers/cisco/dna_spaces/device_location_update.cr
new file mode 100644
index 00000000000..393017be928
--- /dev/null
+++ b/drivers/cisco/dna_spaces/device_location_update.cr
@@ -0,0 +1,51 @@
+require "./events"
+
+class Cisco::DNASpaces::DeviceLocationUpdate
+ include JSON::Serializable
+
+ getter device : Device
+ getter location : Location
+
+ getter ssid : String
+
+ @[JSON::Field(key: "rawUserId")]
+ getter raw_user_id : String
+
+ @[JSON::Field(key: "visitId")]
+ getter visit_id : String
+
+ @[JSON::Field(key: "lastSeen")]
+ property last_seen : Int64
+
+ @[JSON::Field(key: "deviceClassification")]
+ getter device_classification : String
+
+ @[JSON::Field(key: "mapId")]
+ property map_id : String
+
+ @[JSON::Field(key: "xPos")]
+ getter x_pos : Float64
+
+ @[JSON::Field(key: "yPos")]
+ getter y_pos : Float64
+
+ @[JSON::Field(key: "confidenceFactor")]
+ getter confidence_factor : Float64
+ getter latitude : Float64
+ getter longitude : Float64
+ getter unc : Float64
+
+ @[JSON::Field(ignore: true)]
+ @location_mappings : Hash(String, String)? = nil
+
+ # Ensure we only process these once
+ def location_mappings : Hash(String, String)
+ if mappings = @location_mappings
+ mappings
+ else
+ mappings = location.details
+ @location_mappings = mappings
+ mappings
+ end
+ end
+end
diff --git a/drivers/cisco/dna_spaces/device_presence.cr b/drivers/cisco/dna_spaces/device_presence.cr
new file mode 100644
index 00000000000..2c3973a4035
--- /dev/null
+++ b/drivers/cisco/dna_spaces/device_presence.cr
@@ -0,0 +1,54 @@
+require "./events"
+
+class Cisco::DNASpaces::DevicePresence
+ include JSON::Serializable
+
+ @[JSON::Field(key: "presenceEventType")]
+ getter presence_event_type : String
+
+ @[JSON::Field(key: "wasInActive")]
+ getter was_in_active : Bool
+ getter device : Device
+ getter location : Location
+
+ getter ssid : String
+
+ @[JSON::Field(key: "rawUserId")]
+ getter raw_user_id : String
+
+ @[JSON::Field(key: "visitId")]
+ getter visit_id : String
+
+ @[JSON::Field(key: "daysSinceLastVisit")]
+ getter days_since_last_visit : Int32
+
+ @[JSON::Field(key: "entryTimestamp")]
+ getter entry_timestamp : Int64
+
+ @[JSON::Field(key: "entryDateTime")]
+ getter entry_datetime : String
+
+ @[JSON::Field(key: "exitTimestamp")]
+ getter exit_timestamp : Int64
+
+ @[JSON::Field(key: "exitDateTime")]
+ getter exit_date_time : String
+
+ @[JSON::Field(key: "visitDurationMinutes")]
+ getter visit_duration_minutes : Int32
+
+ @[JSON::Field(key: "timeZone")]
+ getter time_zone : String
+
+ @[JSON::Field(key: "deviceClassification")]
+ getter device_classification : String
+
+ @[JSON::Field(key: "visitClassification")]
+ getter visit_classification : String
+
+ @[JSON::Field(key: "activeDevicesCount")]
+ getter active_devices_count : Int32
+
+ @[JSON::Field(key: "inActiveDevicesCount")]
+ getter inactive_devices_count : Int32
+end
diff --git a/drivers/cisco/dna_spaces/events.cr b/drivers/cisco/dna_spaces/events.cr
new file mode 100644
index 00000000000..db66a992585
--- /dev/null
+++ b/drivers/cisco/dna_spaces/events.cr
@@ -0,0 +1,118 @@
+require "json"
+require "../dna_spaces"
+require "./location"
+require "./device"
+require "./*"
+
+# This is used to map the various events into a simpler data structure
+abstract class Cisco::DNASpaces::Events
+ include JSON::Serializable
+
+ # event type hint
+ use_json_discriminator "eventType", {
+ "KEEP_ALIVE" => KeepAlive,
+ "DEVICE_ENTRY" => DeviceEntryWrapper,
+ "DEVICE_EXIT" => DeviceExitWrapper,
+ "PROFILE_UPDATE" => ProfileUpdateWrapper,
+ "LOCATION_CHANGE" => LocationChangeWrapper,
+ "DEVICE_LOCATION_UPDATE" => DeviceLocationUpdateWrapper,
+ "TP_PEOPLE_COUNT_UPDATE" => PeopleCountUpdateWrapper,
+ "DEVICE_PRESENCE" => DevicePresenceWrapper,
+ "USER_PRESENCE" => UserPresenceWrapper,
+ "APP_ACTIVATION" => AppActivactionWrapper,
+ "DEVICE_COUNT" => DeviceCountWrapper,
+ }
+
+ @[JSON::Field(key: "recordUid")]
+ getter record_uid : String
+
+ @[JSON::Field(key: "recordTimestamp")]
+ getter record_timestamp : Int64
+
+ @[JSON::Field(key: "spacesTenantId")]
+ getter spaces_tenant_id : String
+
+ @[JSON::Field(key: "spacesTenantName")]
+ getter spaces_tenant_name : String
+
+ @[JSON::Field(key: "partnerTenantId")]
+ getter partner_tenant_id : String
+end
+
+class Cisco::DNASpaces::KeepAlive < Cisco::DNASpaces::Events
+ getter eventType : String = "KEEP_ALIVE"
+
+ def payload
+ nil
+ end
+end
+
+class Cisco::DNASpaces::DeviceEntryWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "DEVICE_ENTRY"
+
+ @[JSON::Field(key: "deviceEntry")]
+ getter payload : DeviceEntry
+end
+
+class Cisco::DNASpaces::DeviceExitWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "DEVICE_EXIT"
+
+ @[JSON::Field(key: "deviceExit")]
+ getter payload : DeviceExit
+end
+
+class Cisco::DNASpaces::ProfileUpdateWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "PROFILE_UPDATE"
+
+ @[JSON::Field(key: "deviceProfileUpdate")]
+ getter payload : Device
+end
+
+class Cisco::DNASpaces::LocationChangeWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "LOCATION_CHANGE"
+
+ @[JSON::Field(key: "locationHierarchyChange")]
+ getter payload : LocationChange
+end
+
+class Cisco::DNASpaces::DeviceLocationUpdateWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "DEVICE_LOCATION_UPDATE"
+
+ @[JSON::Field(key: "deviceLocationUpdate")]
+ getter payload : DeviceLocationUpdate
+end
+
+class Cisco::DNASpaces::PeopleCountUpdateWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "TP_PEOPLE_COUNT_UPDATE"
+
+ @[JSON::Field(key: "tpPeopleCountUpdate")]
+ getter payload : PeopleCountUpdate
+end
+
+class Cisco::DNASpaces::DevicePresenceWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "DEVICE_PRESENCE"
+
+ @[JSON::Field(key: "devicePresence")]
+ getter payload : DevicePresence
+end
+
+class Cisco::DNASpaces::UserPresenceWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "USER_PRESENCE"
+
+ @[JSON::Field(key: "userPresence")]
+ getter payload : UserPresence
+end
+
+class Cisco::DNASpaces::AppActivactionWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "APP_ACTIVATION"
+
+ @[JSON::Field(key: "appActivation")]
+ getter payload : AppActivaction
+end
+
+class Cisco::DNASpaces::DeviceCountWrapper < Cisco::DNASpaces::Events
+ getter eventType : String = "DEVICE_COUNT"
+
+ @[JSON::Field(key: "deviceCounts")]
+ getter payload : DeviceCount
+end
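`use_json_discriminator` selects the concrete wrapper class from the `eventType` field before deserialising the rest of the record. The same dispatch pattern in plain Ruby, with a reduced illustrative class set:

```ruby
require "json"

class KeepAlive
  def payload; nil; end
end

class DeviceExitWrapper
  attr_reader :payload
  def initialize(json); @payload = json["deviceExit"]; end
end

# eventType => builder, mirroring the discriminator table above
EVENT_TYPES = {
  "KEEP_ALIVE"  => ->(_json) { KeepAlive.new },
  "DEVICE_EXIT" => ->(json) { DeviceExitWrapper.new(json) },
}

def parse_event(raw)
  json = JSON.parse(raw)
  builder = EVENT_TYPES[json["eventType"]]
  raise "unknown eventType #{json["eventType"]}" unless builder
  builder.call(json)
end

event = parse_event(%({"eventType":"DEVICE_EXIT","deviceExit":{"visitId":"v1"}}))
# event.payload["visitId"] == "v1"
```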
diff --git a/drivers/cisco/dna_spaces/location.cr b/drivers/cisco/dna_spaces/location.cr
new file mode 100644
index 00000000000..12e2228beca
--- /dev/null
+++ b/drivers/cisco/dna_spaces/location.cr
@@ -0,0 +1,30 @@
+require "./events"
+
+class Cisco::DNASpaces::Location
+ include JSON::Serializable
+
+ @[JSON::Field(key: "locationId")]
+ getter location_id : String
+ getter name : String
+
+ # TODO:: this might be better as an enum
+ # if there are only limited types
+ @[JSON::Field(key: "inferredLocationTypes")]
+ getter tags : Array(String)
+
+ getter parent : Location?
+
+ # Maps tag names to location_ids
+ def details(mappings = {} of String => String)
+ parent.try &.details(mappings)
+ tags.each { |tag| mappings[tag] = location_id }
+ mappings
+ end
+
+ # Maps location_ids to location names
+ def descriptions(mappings = {} of String => String)
+ parent.try &.descriptions(mappings)
+ mappings[location_id] = name
+ mappings
+ end
+end
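`details` and `descriptions` above recurse root-first, so a child location's tags override its parent's in the resulting hash. A Ruby sketch of the same walk, using an illustrative `Location` struct:

```ruby
Location = Struct.new(:location_id, :name, :tags, :parent) do
  # tag name => location_id, parents filled in first so children override
  def details(mappings = {})
    parent&.details(mappings)
    tags.each { |tag| mappings[tag] = location_id }
    mappings
  end

  # location_id => human-readable name
  def descriptions(mappings = {})
    parent&.descriptions(mappings)
    mappings[location_id] = name
    mappings
  end
end

campus = Location.new("c1", "Campus", ["CAMPUS"], nil)
floor  = Location.new("f1", "Level 2", ["FLOOR"], campus)
floor.details      # => {"CAMPUS"=>"c1", "FLOOR"=>"f1"}
floor.descriptions # => {"c1"=>"Campus", "f1"=>"Level 2"}
```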
diff --git a/drivers/cisco/dna_spaces/location_change.cr b/drivers/cisco/dna_spaces/location_change.cr
new file mode 100644
index 00000000000..6043e172f14
--- /dev/null
+++ b/drivers/cisco/dna_spaces/location_change.cr
@@ -0,0 +1,32 @@
+require "./events"
+
+class Cisco::DNASpaces::LocationChange
+ include JSON::Serializable
+
+ @[JSON::Field(key: "changeType")]
+ getter change_type : String
+ getter location : Location
+
+ class Metadata
+ include JSON::Serializable
+
+ getter key : String
+ getter values : Array(String)
+ end
+
+ class LocationDetails
+ include JSON::Serializable
+
+ @[JSON::Field(key: "timeZone")]
+ getter time_zone : String
+ getter city : String
+ getter state : String
+ getter country : String
+ getter category : String
+
+ getter latitude : Float64
+ getter longitude : Float64
+
+ getter metadata : Array(Metadata)
+ end
+end
diff --git a/drivers/cisco/dna_spaces/location_details.cr b/drivers/cisco/dna_spaces/location_details.cr
new file mode 100644
index 00000000000..c69048af78f
--- /dev/null
+++ b/drivers/cisco/dna_spaces/location_details.cr
@@ -0,0 +1,16 @@
+require "./events"
+
+class Cisco::DNASpaces::LocationDetails
+ include JSON::Serializable
+
+ @[JSON::Field(key: "timeZone")]
+ getter time_zone : String
+
+ getter city : String
+ getter state : String
+ getter country : String
+ getter category : String
+
+ getter latitude : Float64
+ getter longitude : Float64
+end
diff --git a/drivers/cisco/dna_spaces/map_info.cr b/drivers/cisco/dna_spaces/map_info.cr
new file mode 100644
index 00000000000..31be13841d7
--- /dev/null
+++ b/drivers/cisco/dna_spaces/map_info.cr
@@ -0,0 +1,30 @@
+require "./events"
+
+class Cisco::DNASpaces::Dimension
+ include JSON::Serializable
+
+ getter length : Float64
+ getter width : Float64
+ getter height : Float64
+
+ @[JSON::Field(key: "offsetX")]
+ getter offset_x : Float64
+
+ @[JSON::Field(key: "offsetY")]
+ getter offset_y : Float64
+end
+
+class Cisco::DNASpaces::MapInfo
+ include JSON::Serializable
+
+ @[JSON::Field(key: "mapId")]
+ getter id : String
+
+ @[JSON::Field(key: "imageWidth")]
+ getter image_width : Float64
+
+ @[JSON::Field(key: "imageHeight")]
+ getter image_height : Float64
+
+ getter dimension : Cisco::DNASpaces::Dimension
+end
diff --git a/drivers/cisco/dna_spaces/people_count_update.cr b/drivers/cisco/dna_spaces/people_count_update.cr
new file mode 100644
index 00000000000..99d8f35cd22
--- /dev/null
+++ b/drivers/cisco/dna_spaces/people_count_update.cr
@@ -0,0 +1,32 @@
+require "./events"
+
+# This is triggered from telepresence devices
+class Cisco::DNASpaces::PeopleCountUpdate
+ include JSON::Serializable
+
+ @[JSON::Field(key: "tpDeviceId")]
+ getter tp_device_id : String
+ getter location : Location
+ getter presence : Bool
+
+ @[JSON::Field(key: "peopleCount")]
+ getter people_count : Int32
+
+ @[JSON::Field(key: "standbyState")]
+ getter standby_state : Int32
+
+ @[JSON::Field(key: "ambientNoise")]
+ getter ambient_noise : Int32
+
+ @[JSON::Field(key: "drynessScore")]
+ getter dryness_score : Int32
+
+ @[JSON::Field(key: "activeCalls")]
+ getter active_calls : Int32
+
+ @[JSON::Field(key: "presentationState")]
+ getter presentation_state : Int32
+
+ @[JSON::Field(key: "timeStamp")]
+ getter timestamp : Int64
+end
diff --git a/drivers/cisco/dna_spaces/user_presence.cr b/drivers/cisco/dna_spaces/user_presence.cr
new file mode 100644
index 00000000000..1882f4b25a2
--- /dev/null
+++ b/drivers/cisco/dna_spaces/user_presence.cr
@@ -0,0 +1,83 @@
+require "./events"
+
+class Cisco::DNASpaces::UserPresence
+ include JSON::Serializable
+
+ class User
+ include JSON::Serializable
+
+ @[JSON::Field(key: "userId")]
+ getter user_id : String
+
+ @[JSON::Field(key: "deviceIds")]
+ getter device_ids : Array(String)
+ getter tags : Array(String)
+ getter mobile : String
+ getter email : String
+ getter gender : String
+
+ @[JSON::Field(key: "firstName")]
+ getter first_name : String
+
+ @[JSON::Field(key: "lastName")]
+ getter last_name : String
+
+ @[JSON::Field(key: "postalCode")]
+ getter postal_code : String
+
+ # otherFields
+ # socialNetworkInfo
+ end
+
+ class UserCount
+ include JSON::Serializable
+
+ @[JSON::Field(key: "usersWithUserId")]
+ getter users_with_user_id : Int64
+
+ @[JSON::Field(key: "usersWithoutUserId")]
+ getter users_without_user_id : Int64
+
+ @[JSON::Field(key: "totalUsers")]
+ getter total_users : Int64
+ end
+
+ @[JSON::Field(key: "presenceEventType")]
+ getter presence_event_type : String
+
+ @[JSON::Field(key: "wasInActive")]
+ getter was_in_active : Bool
+
+ getter user : User
+ getter location : Location
+
+ @[JSON::Field(key: "rawUserId")]
+ getter raw_user_id : String
+
+ @[JSON::Field(key: "visitId")]
+ getter visit_id : String
+
+ @[JSON::Field(key: "entryTimestamp")]
+ getter entry_timestamp : Int64
+
+ @[JSON::Field(key: "entryDateTime")]
+ getter entry_datetime : String
+
+ @[JSON::Field(key: "exitTimestamp")]
+ getter exit_timestamp : Int64
+
+ @[JSON::Field(key: "exitDateTime")]
+ getter exit_datetime : String
+
+ @[JSON::Field(key: "visitDurationMinutes")]
+ getter visit_duration_minutes : Int32
+
+ @[JSON::Field(key: "timeZone")]
+ getter time_zone : String
+
+ @[JSON::Field(key: "activeUsersCount")]
+ getter active_users_count : UserCount
+
+ @[JSON::Field(key: "inActiveUsersCount")]
+ getter inactive_users_count : UserCount
+end
diff --git a/drivers/cisco/dna_spaces_spec.cr b/drivers/cisco/dna_spaces_spec.cr
new file mode 100644
index 00000000000..c3df6cba22c
--- /dev/null
+++ b/drivers/cisco/dna_spaces_spec.cr
@@ -0,0 +1,16 @@
+DriverSpecs.mock_driver "Cisco::DNASpaces" do
+ # The dashboard should request the streaming API
+ expect_http_request do |request, response|
+ headers = request.headers
+ if headers["X-API-KEY"]? == "X-API-KEY"
+ response.headers["Transfer-Encoding"] = "chunked"
+ response.status_code = 200
+ response << %({"recordUid":"event-85b84f15","recordTimestamp":1605502585236,"spacesTenantId":"","spacesTenantName":"","partnerTenantId":"","eventType":"KEEP_ALIVE"})
+ else
+ response.status_code = 401
+ end
+ end
+
+ # Should standardise the format of MAC addresses
+ exec(:format_mac, "0x12:34:A6-789B").get.should eq %(1234a6789b)
+end
diff --git a/drivers/cisco/meraki/captive_portal.cr b/drivers/cisco/meraki/captive_portal.cr
new file mode 100644
index 00000000000..5866d8d181b
--- /dev/null
+++ b/drivers/cisco/meraki/captive_portal.cr
@@ -0,0 +1,145 @@
+module Cisco; end
+
+module Cisco::Meraki; end
+
+require "json"
+require "openssl"
+
+class Cisco::Meraki::CaptivePortal < PlaceOS::Driver
+ # Discovery Information
+ descriptive_name "Cisco Meraki Captive Portal"
+ generic_name :CaptivePortal
+ description %(
+ for more information visit: https://meraki.cisco.com/lib/pdf/meraki_whitepaper_captive_portal.pdf
+ )
+
+ default_settings({
+ wifi_secret: "anything really",
+ default_timezone: "Australia/Sydney",
+ date_format: "%Y%m%d",
+ # duration of access in hours
+ access_duration: 12,
+ # Length of the clients wifi code
+ code_length: 4,
+ success_url: "https://company.com/welcome",
+ })
+
+ def on_load
+ on_update
+ end
+
+ @wifi_secret : String = ""
+ @date_format : String = "%Y%m%d"
+ @success_url : String = "https://place.technology/"
+ @default_timezone : Time::Location = Time::Location.load("Australia/Sydney")
+ @access_duration : Time::Span = 12.hours
+ @code_length : Int32 = 4
+
+ @denied : UInt64 = 0_u64
+ @granted : UInt64 = 0_u64
+ @errors : UInt64 = 0_u64
+
+ @guests : Hash(String, ChallengePayload) = {} of String => ChallengePayload
+
+ def on_update
+ @wifi_secret = setting?(String, :wifi_secret) || "anything really"
+ @date_format = setting?(String, :date_format) || "%Y%m%d"
+ @success_url = setting?(String, :success_url) || "https://place.technology/"
+ @access_duration = (setting?(Int32, :access_duration) || 12).hours
+ @code_length = setting?(Int32, :code_length) || 4
+
+ time_zone = setting?(String, :default_timezone).presence
+ @default_timezone = Time::Location.load(time_zone) if time_zone
+ end
+
+ @[Security(Level::Support)]
+ def guests
+ @guests
+ end
+
+ @[Security(Level::Support)]
+ def lookup(mac : String)
+ @guests[format_mac(mac)]
+ end
+
+ def generate_guest_data(email : String, time : Int64, time_zone : String? = nil)
+ time_zone = time_zone.presence ? Time::Location.load(time_zone.not_nil!) : @default_timezone
+ date = Time.unix(time).in(time_zone).to_s(@date_format)
+ guest_string = "#{email.downcase}-#{date}-#{@wifi_secret}"
+
+ OpenSSL::Digest.new("SHA256").update(guest_string).final.hexstring
+ end
+
+ # Splits the SHA256 hex digest into code-length chunks and randomly selects one
+ def generate_guest_token(email : String, time : Int64, time_zone : String? = nil)
+ generate_guest_data(email, time, time_zone).scan(/.{#{@code_length}}/).sample(1)[0][0]
+ end
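The chunk-and-sample step above can be sketched in Ruby (mirroring the Crystal, with illustrative inputs; Ruby's `scan` returns strings directly, so no `[0]` indexing is needed):

```ruby
require "digest"

# Illustrative inputs mirroring the driver's default settings
wifi_secret = "anything really"
code_length = 4

# The guest string combines email, date and the shared secret
guest_string = "guest@email.com-20200907-#{wifi_secret}"
digest = Digest::SHA256.hexdigest(guest_string) # 64 hex characters

# Split the digest into code-length chunks; any chunk is a valid code
codes = digest.scan(/.{#{code_length}}/)
token = codes.sample

puts codes.size # 64 / 4 = 16 chunks
puts codes.include?(token)
```

Because validation regenerates the digest and accepts any matching chunk, guests can be emailed any one of the sixteen chunks.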
+
+ class ChallengePayload
+ include JSON::Serializable
+
+ property ap_mac : String
+ property client_ip : String
+ property client_mac : String
+ property base_grant_url : String
+ property user_continue : String?
+
+ # key they were provided in their invite email
+ property code : String
+ property email : String
+ property timezone : String?
+
+ property expires : Time? = nil
+ end
+
+ EMPTY_HEADERS = {} of String => String
+ JSON_HEADERS = {
+ "Content-Type" => "application/json",
+ }
+
+ # Webhook for providing guest access
+ def challenge(method : String, headers : Hash(String, Array(String)), body : String)
+ logger.debug { "guest access attempt: #{method},\nheaders #{headers},\nbody #{body}" }
+
+ challenge = ChallengePayload.from_json(body)
+
+ check_code = challenge.code
+ guest_codes = generate_guest_data(challenge.email, Time.utc.to_unix, challenge.timezone)
+ matched = guest_codes.scan(/.{#{@code_length}}/).any? { |code| code[0] == check_code }
+
+ if matched
+ challenge.expires = @access_duration.from_now
+ @guests[format_mac(challenge.client_mac)] = challenge
+ @granted += 1_u64
+ self[:granted_access] = @granted
+
+ redirect_url = "#{challenge.base_grant_url}?duration=#{@access_duration.to_i}&continue_url=#{challenge.user_continue || @success_url}"
+ response = {
+ redirect_to: redirect_url,
+ }.to_json
+
+ logger.debug { "successfully joined network #{challenge.inspect}" }
+
+ # Redirect to the success URL
+ {HTTP::Status::OK, JSON_HEADERS, response}
+ else
+ @denied += 1_u64
+ self[:denied_access] = @denied
+
+ logger.debug { "failed wifi access attempt by #{challenge.inspect}" }
+
+ {HTTP::Status::NOT_ACCEPTABLE, JSON_HEADERS, "{}"}
+ end
+ rescue error
+ @errors += 1_u64
+ self[:errors] = @errors
+ last_error = error.inspect_with_backtrace
+ self[:last_error] = last_error
+ logger.error { "failed to parse wifi challenge payload\n#{error}" }
+ {HTTP::Status::INTERNAL_SERVER_ERROR, EMPTY_HEADERS, nil}
+ end
+
+ protected def format_mac(address : String)
+ address.gsub(/(0x|[^0-9A-Fa-f])*/, "").downcase
+ end
+end
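The MAC normalisation used by `format_mac` (in both this driver and the dashboard driver) can be sketched in Ruby:

```ruby
# Strips "0x" prefixes and any non-hex characters, then downcases,
# mirroring the drivers' format_mac helper
def format_mac(address)
  address.gsub(/(0x|[^0-9A-Fa-f])/, "").downcase
end

puts format_mac("0x12:34:A6-789B")   # => "1234a6789b"
puts format_mac("68:3a:1e:54:5b:0c") # => "683a1e545b0c"
```

This means colon-, hyphen- and `0x`-formatted addresses all map to the same lookup key.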
diff --git a/drivers/cisco/meraki/captive_portal_spec.cr b/drivers/cisco/meraki/captive_portal_spec.cr
new file mode 100644
index 00000000000..a627b8b4b81
--- /dev/null
+++ b/drivers/cisco/meraki/captive_portal_spec.cr
@@ -0,0 +1,15 @@
+require "openssl"
+
+DriverSpecs.mock_driver "Cisco::Meraki::CaptivePortal" do
+ date = Time.unix(1599477274).in(Time::Location.load("Australia/Sydney")).to_s("%Y%m%d")
+ hexdigest = OpenSSL::Digest.new("SHA256").update("guest@email.com-#{date}-anything really").final.hexstring
+
+ # Check the hex codes match
+ retval = exec(:generate_guest_data, "guest@email.com", 1599477274, "Australia/Sydney")
+ retval.get.should eq hexdigest
+
+ # check it matches one of the codes
+ codes = hexdigest.scan(/.{4}/).map { |code| code[0] }
+ retval = exec(:generate_guest_token, "guest@email.com", 1599477274, "Australia/Sydney")
+ codes.includes?(retval.get.not_nil!.as_s).should eq true
+end
diff --git a/drivers/cisco/meraki/dashboard.cr b/drivers/cisco/meraki/dashboard.cr
new file mode 100644
index 00000000000..91fe6ce9ee9
--- /dev/null
+++ b/drivers/cisco/meraki/dashboard.cr
@@ -0,0 +1,847 @@
+module Cisco; end
+
+module Cisco::Meraki; end
+
+require "uri"
+require "json"
+require "s2_cells"
+require "link-header"
+require "./scanning_api"
+require "placeos-driver/interface/locatable"
+
+class Cisco::Meraki::Dashboard < PlaceOS::Driver
+ include Interface::Locatable
+
+ # Discovery Information
+ descriptive_name "Cisco Meraki Dashboard"
+ generic_name :Dashboard
+ uri_base "https://api.meraki.com"
+ description %(
+ for more information visit:
+ * Dashboard API: https://documentation.meraki.com/zGeneral_Administration/Other_Topics/The_Cisco_Meraki_Dashboard_API
+ * Scanning API: https://developer.cisco.com/meraki/scanning-api/#!introduction/scanning-api
+
+ NOTE:: API Call volume is rate limited to 5 calls per second per organization
+ )
+
+ default_settings({
+ meraki_validator: "configure if scanning API is enabled",
+ meraki_secret: "configure if scanning API is enabled",
+ meraki_api_key: "configure for the dashboard API",
+
+ # We will always accept a reading with a confidence lower than this
+ acceptable_confidence: 5.0,
+
+ # Max Uncertainty in meters - we don't accept positions that are less certain
+ maximum_uncertainty: 25.0,
+
+ # can we use the meraki dashboard API for user lookups
+ default_network_id: "network_id",
+
+ # Max requests a second made to the dashboard
+ rate_limit: 4,
+
+ # Area index each point on a floor lands on
+ # 21 == ~4 meters squared, which given wifi variance is good enough for tracing
+ # S2 cell levels: https://s2geometry.io/resources/s2cell_statistics.html
+ s2_level: 21,
+ debug_webhook: false,
+
+ # Level mappings, level name for human readability
+ floorplan_mappings: {
+ "g_727894289773756672" => {
+ "building": "zone-12345",
+ "level": "zone-123456",
+ "level_name": "BUILDING - L1",
+ },
+ },
+
+ # Time before a user location is considered probably too old
+ max_location_age: 10,
+
+ # Ignore certain usernames from the dashboard
+ ignore_usernames: ["host/"],
+
+ # Enable / Disable dashboard username lookup completely
+ disable_username_lookup: false,
+ })
+
+ def on_load
+ # We want to store our user => mac_address mappings in redis
+ @user_mac_mappings = PlaceOS::Driver::RedisStorage.new(module_id, "user_macs")
+ spawn { rate_limiter }
+ on_update
+ end
+
+ @scanning_validator : String = ""
+ @scanning_secret : String = ""
+ @api_key : String = ""
+
+ @acceptable_confidence : Float64 = 5.0
+ @maximum_uncertainty : Float64 = 25.0
+
+ @time_multiplier : Float64 = 0.0
+ @confidence_multiplier : Float64 = 0.0
+ @max_location_age : Time::Span = 6.minutes
+ @drift_location_age : Time::Span = 4.minutes
+ @confidence_time : Time::Span = 2.minutes
+
+ @rate_limit : Int32 = 4
+ @channel : Channel(Nil) = Channel(Nil).new(1)
+ @queue_lock : Mutex = Mutex.new
+ @queue_size = 0
+ @wait_time : Time::Span = 300.milliseconds
+
+ @storage_lock : Mutex = Mutex.new
+ @user_mac_mappings : PlaceOS::Driver::RedisStorage? = nil
+ @default_network : String = ""
+ @floorplan_mappings : Hash(String, Hash(String, String | Float64)) = Hash(String, Hash(String, String | Float64)).new
+ @floorplan_sizes = {} of String => FloorPlan
+ @network_devices = {} of String => NetworkDevice
+
+ @s2_level : Int32 = 21
+ @debug_webhook : Bool = false
+ @debug_payload : Bool = false
+ @ignore_usernames : Array(String) = [] of String
+
+ def on_update
+ @scanning_validator = setting?(String, :meraki_validator) || ""
+ @scanning_secret = setting?(String, :meraki_secret) || ""
+ @api_key = setting?(String, :meraki_api_key) || ""
+
+ @rate_limit = setting?(Int32, :rate_limit) || 4
+ @wait_time = 1.second / @rate_limit
+
+ @default_network = setting?(String, :default_network_id) || ""
+
+ @acceptable_confidence = setting?(Float64, :acceptable_confidence) || 5.0
+ @maximum_uncertainty = setting?(Float64, :maximum_uncertainty) || 25.0
+
+ @max_location_age = (setting?(UInt32, :max_location_age) || 6).minutes
+ # Age we keep a confident value (without drifting towards less confidence)
+ @confidence_time = @max_location_age / 3
+ # Age at which we discard a drifting value (accepting a less confident value)
+ @drift_location_age = @max_location_age - @confidence_time
+
+ # How much confidence do we have in this new value, relative to an old confident value
+ @time_multiplier = 1.0_f64 / (@drift_location_age.to_i - @confidence_time.to_i).to_f64
+ @confidence_multiplier = 1.0_f64 / (@maximum_uncertainty.to_i - @acceptable_confidence.to_i).to_f64
+
+ @floorplan_mappings = setting?(Hash(String, Hash(String, String | Float64)), :floorplan_mappings) || @floorplan_mappings
+
+ @s2_level = setting?(Int32, :s2_level) || 21
+ @debug_webhook = setting?(Bool, :debug_webhook) || false
+ @debug_payload = setting?(Bool, :debug_payload) || false
+
+ @ignore_usernames = setting?(Array(String), :ignore_usernames) || [] of String
+ disable_username_lookup = setting?(Bool, :disable_username_lookup) || false
+
+ schedule.clear
+ if @default_network.presence
+ schedule.every(2.minutes) { map_users_to_macs } unless disable_username_lookup
+ schedule.every(29.minutes, immediate: true) { sync_floorplan_sizes }
+ end
+ schedule.every(30.minutes) { cleanup_caches }
+ end
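The timing windows derived in `on_update` can be worked through with the default 6-minute `max_location_age` (illustrative Ruby):

```ruby
# With the default max_location_age of 6 minutes:
max_location_age = 6 * 60.0 # seconds

# Age we keep a confident value without drifting
confidence_time = max_location_age / 3                  # 120 s
# Age at which a drifting value is discarded
drift_location_age = max_location_age - confidence_time # 240 s

# Relative weight of elapsed time when blending an old confident
# reading with a newer, less confident one
time_multiplier = 1.0 / (drift_location_age - confidence_time)

puts confidence_time    # 120.0
puts drift_location_age # 240.0
```

So a reading is held verbatim for its first 2 minutes, drifts towards newer observations for the next 2, and is discarded after 6.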
+
+ protected def user_mac_mappings
+ @storage_lock.synchronize {
+ yield @user_mac_mappings.not_nil!
+ }
+ end
+
+ # Perform fetch with the required API request limits in place
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def fetch(location : String)
+ req(location) { |response| response.body }
+ end
+
+ protected def req(location : String)
+ if (@wait_time * @queue_size) > 10.seconds
+ raise "wait time would be exceeded for API request, #{@queue_size} requests already queued"
+ end
+
+ @queue_lock.synchronize { @queue_size += 1 }
+ @channel.receive
+ @queue_lock.synchronize { @queue_size -= 1 }
+
+ headers = HTTP::Headers{
+ "X-Cisco-Meraki-API-Key" => @api_key,
+ "Content-Type" => "application/json",
+ "Accept" => "application/json",
+ "User-Agent" => "PlaceOS/2.0 PlaceTechnology",
+ }
+
+ uri = URI.parse(location)
+ response = if uri.host.nil?
+ get(location, headers: headers)
+ else
+ HTTP::Client.get(location, headers: headers)
+ end
+
+ if response.success?
+ yield response
+ elsif response.status.found?
+ # Meraki might return a `302` on GET requests
+ response = HTTP::Client.get(response.headers["Location"], headers: headers)
+ if response.success?
+ yield response
+ else
+ raise "request #{location} failed with status: #{response.status_code}"
+ end
+ else
+ raise "request #{location} failed with status: #{response.status_code}"
+ end
+ end
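The rate limiting behind `req` (a channel fed one token per interval by `rate_limiter`, with each request blocking on a receive) can be sketched in Ruby using a `SizedQueue` in place of Crystal's `Channel`; the class name and numbers here are illustrative:

```ruby
# A token is pushed onto a capacity-1 queue once per interval; each
# request must pop a token before proceeding, bounding the call rate
class RateLimiter
  def initialize(per_second)
    @tokens = SizedQueue.new(1)
    @interval = 1.0 / per_second
    Thread.new do
      loop do
        @tokens.push(nil) # blocks while the queue is full
        sleep @interval
      end
    end
  end

  def wait
    @tokens.pop
  end
end

limiter = RateLimiter.new(50) # 50 requests/second => 0.02 s apart
started = Time.now
3.times { limiter.wait }
elapsed = Time.now - started
puts elapsed >= 0.03 # three tokens span at least two intervals
```

The driver additionally rejects a request up front when the queued wait time would exceed 10 seconds, rather than blocking indefinitely.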
+
+ EMPTY_HEADERS = {} of String => String
+ SUCCESS_RESPONSE = {HTTP::Status::OK, EMPTY_HEADERS, nil}
+
+ struct Lookup
+ include JSON::Serializable
+
+ property time : Time
+ property mac : String
+
+ def initialize(@time, @mac)
+ end
+ end
+
+ # MAC Address => Location
+ @locations : Hash(String, Location) = {} of String => Location
+ @ip_lookup : Hash(String, Lookup) = {} of String => Lookup
+
+ def lookup_ip(address : String)
+ @ip_lookup[address.downcase]?
+ end
+
+ def locate_mac(address : String)
+ @locations[format_mac(address)]?
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def inspect_floorplans
+ @floorplan_sizes
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def inspect_network_devices
+ @network_devices
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def inspect_state
+ logger.debug {
+ "IP Mappings: #{@ip_lookup.keys}\n\nMAC Locations: #{@locations.keys}\n\nClient Details: #{@client_details.keys}"
+ }
+ {ip_mappings: @ip_lookup.size, tracking: @locations.size, client_details: @client_details.size}
+ end
+
+ # Returns the list of users who can be located
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def locateable
+ too_old = @max_location_age.ago
+ @client_details.compact_map do |mac, client|
+ location = @locations[mac]?
+ client.user if location && ((location.time > too_old) || (client.time_added > too_old))
+ end
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def poll_clients(network_id : String? = nil, timespan : UInt32 = 900_u32)
+ network_id = network_id.presence || @default_network
+
+ clients = [] of Client
+ next_page = "/api/v1/networks/#{network_id}/clients?perPage=1000&timespan=#{timespan}"
+
+ loop do
+ break unless next_page
+
+ next_page = req(next_page) do |response|
+ clients.concat Array(Client).from_json(response.body)
+ LinkHeader.new(response)["next"]?
+ end
+ end
+
+ clients
+ end
+
+ @client_details : Hash(String, Client) = {} of String => Client
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def map_users_to_macs(network_id : String? = nil)
+ network_id = network_id.presence || @default_network
+
+ logger.debug { "mapping users to device MACs" }
+ clients = poll_clients(network_id)
+
+ new_devices = 0
+ updated_dev = 0
+ now = Time.utc
+
+ logger.debug { "mapping found #{clients.size} devices" }
+
+ user_mac_mappings do |storage|
+ clients.each do |client|
+ # So we can merge additional details into device location responses
+ user_mac = format_mac(client.mac)
+ client.time_added = now
+
+ user_id = client.user
+
+ if user_id
+ @ignore_usernames.each do |name|
+ if user_id.starts_with?(name)
+ client.user = user_id = nil
+ break
+ end
+ end
+ end
+
+ # Attempt to lookup username via learning
+ if user_id.nil?
+ if known_id = storage[user_mac]?
+ client.user = known_id
+ end
+ end
+
+ @client_details[user_mac] = client
+ next unless user_id
+
+ was_update, was_new = map_user_mac(user_mac, user_id, storage)
+ updated_dev += 1 if was_update
+ new_devices += 1 if was_new
+ end
+ end
+
+ logger.debug { "mapping assigned #{new_devices} new devices, #{updated_dev} users updated" }
+ nil
+ end
+
+ protected def map_user_mac(user_mac, user_id, storage)
+ updated_dev = false
+ new_devices = false
+ user_id = format_username(user_id)
+
+ # Check if mac mapping already exists
+ existing_user = storage[user_mac]?
+ return {false, false} if existing_user == user_id
+
+ # Remove any previous mappings
+ if existing_user
+ updated_dev = true
+ if user_macs = storage[existing_user]?
+ macs = Array(String).from_json(user_macs)
+ macs.delete(user_mac)
+ storage[existing_user] = macs.to_json
+ end
+ else
+ new_devices = true
+ end
+
+ # Update the user mappings
+ storage[user_mac] = user_id
+ macs = if user_macs = storage[user_id]?
+ tmp_macs = Array(String).from_json(user_macs)
+ tmp_macs.unshift(user_mac)
+ tmp_macs.uniq!
+ tmp_macs[0...9]
+ else
+ [user_mac]
+ end
+ storage[user_id] = macs.to_json
+
+ {updated_dev, new_devices}
+ end
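The per-user MAC list maintenance above (most recently seen first, deduplicated, capped via `[0...9]`) can be sketched in Ruby; `push_mac` is an illustrative name, not a driver method:

```ruby
# Prepends the newly seen MAC, removes duplicates, and caps the
# list (the driver keeps the first nine entries via macs[0...9])
def push_mac(macs, user_mac)
  macs.unshift(user_mac)
  macs.uniq!
  macs[0...9]
end

puts push_mac(["aa", "bb"], "cc").inspect # ["cc", "aa", "bb"]
puts push_mac(["aa", "bb"], "bb").inspect # ["bb", "aa"]
```

Re-seeing a known MAC simply promotes it to the front rather than duplicating it.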
+
+ def format_username(user : String)
+ if user.includes? "@"
+ user = user.split("@")[0]
+ elsif user.includes? "\\"
+ user = user.split("\\")[1]
+ end
+ user.downcase
+ end
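`format_username` normalises both email-style and `DOMAIN\user` identities to a bare lowercase username; a Ruby sketch mirroring the Crystal above:

```ruby
# Reduces "user@domain" and "DOMAIN\user" forms to a bare
# lowercase username so lookups from different sources agree
def format_username(user)
  if user.include?("@")
    user = user.split("@")[0]
  elsif user.include?("\\")
    user = user.split("\\")[1]
  end
  user.downcase
end

puts format_username("Jane.Doe@company.com") # "jane.doe"
puts format_username("CORP\\JDoe")           # "jdoe"
```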
+
+ def macs_assigned_to(email : String? = nil, username : String? = nil) : Array(String)
+ username = format_username(username.presence || email.presence.not_nil!)
+ if macs = user_mac_mappings { |s| s[username]? }
+ Array(String).from_json(macs)
+ else
+ [] of String
+ end
+ end
+
+ def check_ownership_of(mac_address : String) : OwnershipMAC?
+ lookup = format_mac(mac_address)
+ if user = user_mac_mappings { |s| s[lookup]? }
+ {
+ location: "wireless",
+ assigned_to: user,
+ mac_address: lookup,
+ }
+ end
+ end
+
+ # returns locations based on most recently seen
+ # versus most accurate location
+ def locate_user(email : String? = nil, username : String? = nil)
+ username = format_username(username.presence || email.presence.not_nil!)
+
+ if macs = user_mac_mappings { |s| s[username]? }
+ location_max_age = @max_location_age.ago
+
+ Array(String).from_json(macs).compact_map { |mac|
+ if location = locate_mac(mac)
+ client = @client_details[mac]?
+
+ # We set these here to speed up processing
+ location.client = client
+ location.mac = mac
+
+ if client && client.time_added > location_max_age
+ location
+ elsif location.time > location_max_age
+ location
+ end
+ end
+ }.sort { |a, b|
+ b.time <=> a.time
+ }.map { |location|
+ lat = location.lat
+ lon = location.lng
+
+ loc = {
+ "location" => "wireless",
+ "coordinates_from" => "bottom-left",
+ "x" => location.x,
+ "y" => location.y,
+ "lon" => lon,
+ "lat" => lat,
+ "s2_cell_id" => lat ? S2Cells::LatLon.new(lat.not_nil!, lon.not_nil!).to_token(@s2_level) : nil,
+ "mac" => location.mac,
+ "variance" => location.variance,
+ "last_seen" => location.time.to_unix,
+ "meraki_floor_id" => location.floor_plan_id,
+ "meraki_floor_name" => location.floor_plan_name,
+ }
+
+ # Add our zone IDs to the response
+ if level_data = @floorplan_mappings[location.floor_plan_id]?
+ level_data.each { |k, v| loc[k] = v }
+ end
+
+ # Add meraki map information to the response
+ if map_size = @floorplan_sizes[location.floor_plan_id]?
+ loc["map_width"] = map_size.width
+ loc["map_height"] = map_size.height
+ end
+
+ # Add additional client information if it's available
+ if client = location.client
+ loc["manufacturer"] = client.manufacturer if client.manufacturer
+ loc["os"] = client.os if client.os
+ loc["ssid"] = client.ssid if client.ssid
+ end
+
+ loc
+ }
+ else
+ [] of Nil
+ end
+ end
+
+ def device_locations(zone_id : String, location : String? = nil)
+ logger.debug { "looking up device locations in #{zone_id}" }
+ return [] of Nil if location.presence && location != "wireless"
+
+ # Find the floors associated with the provided zone id
+ floors = [] of String
+ @floorplan_mappings.each do |floor_id, data|
+ floors << floor_id if data.values.includes?(zone_id)
+ end
+ logger.debug { "found matching meraki floors: #{floors}" }
+ return [] of Nil if floors.empty?
+
+ checking_count = @locations.size
+ wrong_floor = 0
+ too_old = 0
+
+ # Find the devices that are on the matching floors
+ oldest_location = @max_location_age.ago
+ matching = @locations.compact_map do |mac, loc|
+ # We set this here to speed up processing
+ client = @client_details[mac]?
+ loc.client = client
+
+ if loc.time < oldest_location
+ if client
+ if client.time_added < oldest_location
+ too_old += 1
+ next
+ end
+ else
+ too_old += 1
+ next
+ end
+ end
+ if !floors.includes?(loc.floor_plan_id)
+ wrong_floor += 1
+ next
+ end
+ # ensure the formatted mac is being used
+ loc.mac = mac
+ loc
+ end
+
+ logger.debug { "found #{matching.size} matching devices\nchecked #{checking_count} locations, #{wrong_floor} were on the wrong floor, #{too_old} were too old" }
+
+ # Build the payload on the matching locations
+ matching.group_by(&.floor_plan_id).flat_map { |floor_id, locations|
+ map_width = -1.0
+ map_height = -1.0
+
+ if map_size = @floorplan_sizes[floor_id]?
+ map_width = map_size.width
+ map_height = map_size.height
+ elsif mappings = @floorplan_mappings[floor_id]?
+ map_width = (mappings["width"]? || map_width).as(Float64)
+ map_height = (mappings["height"]? || map_height).as(Float64)
+ end
+
+ locations.map do |loc|
+ lat = loc.lat
+ lon = loc.lng
+
+ # Add additional client information if it's available
+ if client = @client_details[loc.mac]?
+ manufacturer = client.manufacturer
+ os = client.os
+ ssid = client.ssid
+ end
+
+ {
+ location: :wireless,
+ coordinates_from: "bottom-left",
+ x: loc.x,
+ y: loc.y,
+ lon: lon,
+ lat: lat,
+ s2_cell_id: lat ? S2Cells::LatLon.new(lat.not_nil!, lon.not_nil!).to_token(@s2_level) : nil,
+ mac: loc.mac,
+ variance: loc.variance,
+ last_seen: loc.time.to_unix,
+ map_width: map_width,
+ map_height: map_height,
+ manufacturer: manufacturer,
+ os: os,
+ ssid: ssid,
+ }
+ end
+ }
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def cleanup_caches : Nil
+ logger.debug { "removing IP and location data that is over 30 minutes old" }
+
+ # IP => MAC mappings
+ old = 30.minutes.ago
+ remove_keys = [] of String
+ @ip_lookup.each { |ip, lookup| remove_keys << ip if lookup.time < old }
+ remove_keys.each { |ip| @ip_lookup.delete(ip) }
+ logger.debug { "removed #{remove_keys.size} IP => MAC mappings" }
+
+ # IP => Username mappings
+ remove_keys.clear
+ @ip_usernames.each { |ip, lookup| remove_keys << ip if lookup.time < old }
+ remove_keys.each { |ip| @ip_usernames.delete(ip) }
+ logger.debug { "removed #{remove_keys.size} IP => Username mappings" }
+
+ # Client details
+ remove_keys.clear
+ @client_details.each { |mac, client| remove_keys << mac if client.time_added < old }
+ remove_keys.each { |mac| @client_details.delete(mac) }
+ logger.debug { "removed #{remove_keys.size} client details" }
+
+ # MACs
+ remove_keys.clear
+ @locations.each do |mac, location|
+ if location.time < old
+ if client = @client_details[mac]?
+ remove_keys << mac if client.time_added < old
+ else
+ remove_keys << mac
+ end
+ end
+ end
+ remove_keys.each { |mac| @locations.delete(mac) }
+ logger.debug { "removed #{remove_keys.size} MACs" }
+ end
+
+ @[Security(PlaceOS::Driver::Level::Support)]
+ def sync_floorplan_sizes(network_id : String? = nil)
+ network_id = network_id.presence || @default_network
+ logger.debug { "syncing floor plan sizes for network #{network_id}" }
+
+ floor_plans = {} of String => FloorPlan
+
+ req("/api/v1/networks/#{network_id}/floorPlans") { |response|
+ Array(FloorPlan).from_json(response.body).each do |plan|
+ floor_plans[plan.id] = plan
+ end
+ nil
+ }
+
+ @floorplan_sizes = floor_plans
+
+ # mac address => device location
+ network_devices = {} of String => NetworkDevice
+
+ req("/api/v1/networks/#{network_id}/devices") { |response|
+ Array(NetworkDevice).from_json(response.body).each do |device|
+ next unless device.floor_plan_id
+ network_devices[format_mac(device.mac)] = device
+ end
+ nil
+ }
+
+ @network_devices = network_devices
+
+ {floor_plans, network_devices}
+ end
+
+ # Webhook endpoint for scanning API, expects version 3
+ def scanning_api(method : String, headers : Hash(String, Array(String)), body : String)
+ logger.debug { "scanning API received: #{method},\nheaders #{headers},\nbody size #{body.size}" }
+ logger.debug { body } if @debug_payload
+
+ # Return the scanning API validator code on a GET request
+ return {HTTP::Status::OK.to_i, EMPTY_HEADERS, @scanning_validator} if method == "GET"
+
+ # Check the version matches
+ if !body.starts_with?(%({"version":"3.0"))
+ logger.warn { "unknown scanning API message received:\n#{body[0..96]}" }
+ return SUCCESS_RESPONSE
+ end
+
+ locations_updated = 0
+
+ # Parse the data posted
+ begin
+ seen = DevicesSeen.from_json(body)
+ logger.debug { "parsed meraki payload" }
+
+ # We're only interested in Wifi at the moment
+ if seen.message_type != "WiFi"
+ logger.debug { "ignoring message type: #{seen.message_type}" }
+ return SUCCESS_RESPONSE
+ end
+
+ # Check the secret matches
+ raise "secret mismatch, sent: #{seen.secret}" unless seen.secret == @scanning_secret
+
+ # Extract coordinate data against the MAC address and save IP address mappings
+ # we no longer reject empty observations as we'll use the nearest WAP to estimate the location
+ observations = seen.data.observations # .reject(&.locations.empty?)
+
+ ignore_older = @max_location_age.ago.in Time::Location::UTC
+ drift_older = @drift_location_age.ago.in Time::Location::UTC
+ current_time = Time.utc
+ current_time_unix = current_time.to_unix
+
+ observations.each do |observation|
+ client_mac = format_mac(observation.client_mac)
+ existing = @locations[client_mac]?
+
+ logger.debug { "parsing new observation for #{client_mac}" } if @debug_webhook
+ location = parse(existing, ignore_older, drift_older, observation)
+ if location
+ @locations[client_mac] = location
+ locations_updated += 1
+ end
+ update_ipv4(observation.ipv4, client_mac, current_time)
+ update_ipv6(observation.ipv6.try(&.downcase), client_mac, current_time)
+ end
+ rescue e
+ logger.error { "failed to parse meraki scanning API payload\n#{e.inspect_with_backtrace}" }
+ logger.debug { "failed payload body was\n#{body}" }
+ end
+
+ logger.debug { "updated #{locations_updated} locations" }
+
+ # Return a 200 response
+ SUCCESS_RESPONSE
+ end
+
+ protected def parse(existing, ignore_older, drift_older, observation) : Location?
+ locations_raw = observation.locations
+
+ # We'll attempt to return a location based on the nearest WAP
+ if locations_raw.empty?
+ last_seen = observation.latest_record
+ if wap_device = @network_devices[format_mac(last_seen.nearest_ap_mac)]?
+ return wap_device.location unless wap_device.location.nil?
+
+ if floor_plan = @floorplan_sizes[wap_device.floor_plan_id.not_nil!]?
+ return wap_device.location = Location.calculate_location(floor_plan, wap_device, last_seen.time)
+ end
+ end
+ return nil
+ end
+
+ # existing.time is our adjusted time
+ if existing_time = existing.try &.time
+ existing = nil if existing_time < ignore_older
+ end
+
+ # remove locations that don't have an x,y, are very uncertain, or are very old
+ locations = locations_raw.reject do |loc|
+ loc.get_x.nil? || loc.variance > @maximum_uncertainty
+ end
+
+ if locations.empty?
+ logger.debug {
+ if locations_raw.empty?
+ "ignored as no location data provided"
+ else
+ "ignored as no location in observation met minimum requirements, had coordinates: #{!!locations_raw[0].get_x}, uncertainty: #{locations_raw[0].variance}"
+ end
+ } if @debug_webhook
+ return existing
+ end
+
+ # ensure oldest -> newest (we adjusted these already)
+ locations = locations.sort { |a, b| a.time <=> b.time }
+
+ # estimate the location given the current observations
+ location = existing || locations.shift
+ locations.each do |new_loc|
+ next unless new_loc.time >= location.time
+
+ # If acceptable then this is newer
+ if new_loc.variance < @acceptable_confidence
+ location = new_loc
+ next
+ end
+
+ # if more accurate and newer then we'll take this
+ if new_loc.variance < location.variance
+ location = new_loc
+ next
+ end
+
+ # should we drift the older location towards a less accurate newer location
+ if location.time < drift_older
+ # if the floor has changed we should accept the newer, less accurate location
+ if location.floor_plan_id != new_loc.floor_plan_id
+ location = new_loc
+ next
+ end
+
+ new_uncertainty = new_loc.variance
+ old_uncertainty = location.variance
+
+ confidence_factor = 1.0 - (@confidence_multiplier * (new_uncertainty - @acceptable_confidence))
+ confidence_factor = 0.0 if confidence_factor < 0
+
+ time_diff = new_loc.time.to_unix - location.time.to_unix
+ time_factor = @time_multiplier * (time_diff - @confidence_time.to_i).to_f
+ time_factor = 0.0 if time_factor < 0
+
+ # Average of the confidence factors
+ average_multiplier = (confidence_factor + time_factor) / 2.0
+
+ new_x = new_loc.x!
+ new_y = new_loc.y!
+ old_x = location.x!
+ old_y = location.y!
+
+ # 7.5 = 5 + (( 10 - 5 ) * 0.5)
+ new_x = old_x + ((new_x - old_x) * average_multiplier)
+ new_y = old_y + ((new_y - old_y) * average_multiplier)
+ new_uncertainty = old_uncertainty + ((new_uncertainty - old_uncertainty) * average_multiplier)
+
+ new_loc.x = new_x
+ new_loc.y = new_y
+ new_loc.variance = new_uncertainty
+ location = new_loc
+ end
+ end
+
+ location
+ end
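The drift step above linearly interpolates the old position towards a newer, less certain reading. A standalone Ruby sketch using the driver's default settings (5 m acceptable confidence, 25 m maximum uncertainty, 2-minute confidence window) and illustrative coordinates:

```ruby
acceptable_confidence = 5.0
maximum_uncertainty = 25.0
confidence_time = 120.0     # seconds
drift_location_age = 240.0  # seconds

confidence_multiplier = 1.0 / (maximum_uncertainty - acceptable_confidence)
time_multiplier = 1.0 / (drift_location_age - confidence_time)

# A new reading 10 m uncertain, arriving 180 s after the old one
new_uncertainty = 10.0
time_diff = 180.0

# Confidence factor shrinks as uncertainty grows; time factor grows
# as the old reading ages past the confidence window
confidence_factor = 1.0 - (confidence_multiplier * (new_uncertainty - acceptable_confidence))
time_factor = time_multiplier * (time_diff - confidence_time)
average_multiplier = (confidence_factor + time_factor) / 2.0

# Drift the old x = 5.0 towards the new x = 10.0
old_x, new_x = 5.0, 10.0
drifted_x = old_x + ((new_x - old_x) * average_multiplier)

puts average_multiplier # ~ 0.625
puts drifted_x          # ~ 8.125
```

A perfectly confident new reading (factor 1.0) replaces the old position outright, while a barely acceptable one barely moves it.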
+
+ protected def update_ipv4(ipv4, client_mac, current_time)
+ return unless ipv4
+
+ lookup = @ip_lookup[ipv4]? || Lookup.new(current_time, client_mac)
+ lookup.time = current_time
+ lookup.mac = client_mac
+ @ip_lookup[ipv4] = lookup
+
+ if lookup = @ip_usernames[ipv4]?
+ username = lookup.mac
+ user_mac_mappings { |storage| map_user_mac(client_mac, username, storage) }
+ end
+ end
+
+ protected def update_ipv6(ipv6, client_mac, current_time)
+ return unless ipv6
+
+ lookup = @ip_lookup[ipv6]? || Lookup.new(current_time, client_mac)
+ lookup.time = current_time
+ lookup.mac = client_mac
+ @ip_lookup[ipv6] = lookup
+
+ if lookup = @ip_usernames[ipv6]?
+ username = lookup.mac
+ user_mac_mappings { |storage| map_user_mac(client_mac, username, storage) }
+ end
+ end
+
+ def format_mac(address : String)
+ address.gsub(/(0x|[^0-9A-Fa-f])*/, "").downcase
+ end
+
+ protected def rate_limiter
+ loop do
+ begin
+ @channel.send(nil)
+ rescue error
+ logger.error(exception: error) { "issue with rate limiter" }
+ ensure
+ sleep @wait_time
+ end
+ end
+ rescue
+ # Possible error with logging exception, restart rate limiter silently
+ spawn { rate_limiter }
+ end
+
+ # ip => {username, time}
+ @ip_usernames : Hash(String, Lookup) = {} of String => Lookup
+
+ @[Security(PlaceOS::Driver::Level::Administrator)]
+ def ip_username_mappings(ip_map : Array(Tuple(String, String, String, String?))) : Nil
+ now = Time.utc
+ user_mac_mappings do |storage|
+ ip_map.each do |(ip, username, domain, hostname)|
+ username = format_username(username)
+ @ip_usernames[ip] = Lookup.new(now, username)
+
+ if lookup = @ip_lookup[ip]?
+ map_user_mac(lookup.mac, username, storage)
+ end
+ end
+ end
+ end
+
+ @[Security(PlaceOS::Driver::Level::Administrator)]
+ def mac_address_mappings(username : String, macs : Array(String), domain : String = "")
+ username = format_username(username)
+ user_mac_mappings do |storage|
+ macs.each { |mac| map_user_mac(format_mac(mac), username, storage) }
+ end
+ end
+end
diff --git a/drivers/cisco/meraki/dashboard_spec.cr b/drivers/cisco/meraki/dashboard_spec.cr
new file mode 100644
index 00000000000..289faa22dfa
--- /dev/null
+++ b/drivers/cisco/meraki/dashboard_spec.cr
@@ -0,0 +1,107 @@
+require "./scanning_api"
+
+DriverSpecs.mock_driver "Cisco::Meraki::Dashboard" do
+ # The dashboard should request the floorplan sizes on load
+ expect_http_request do |request, response|
+ headers = request.headers
+ if headers["X-Cisco-Meraki-API-Key"]? == "configure for the dashboard API"
+ response.status_code = 200
+ response << %([{"floorPlanId":"floor-123","name":"Level 1","width":30.5,"height":20,"topLeftCorner":{"lat":0,"lng":0},"bottomLeftCorner":{"lat":0,"lng":0},"bottomRightCorner":{"lat":0,"lng":0}}])
+ else
+ response.status_code = 401
+ end
+ end
+
+ # The dashboard should also request the WAP locations
+ expect_http_request do |request, response|
+ headers = request.headers
+ if headers["X-Cisco-Meraki-API-Key"]? == "configure for the dashboard API"
+ response.status_code = 200
+ response << %([])
+ else
+ response.status_code = 401
+ end
+ end
+
+ # Send the request
+ retval = exec(:fetch, "/api/v0/organizations")
+
+ # The dashboard should send a HTTP request with the API key
+ expect_http_request do |request, response|
+ headers = request.headers
+ if headers["X-Cisco-Meraki-API-Key"]? == "configure for the dashboard API"
+ response.status_code = 202
+ response << %([{"id":"org id","name":"place tech"}])
+ else
+ response.status_code = 401
+ end
+ end
+
+ # Should return the payload
+ retval.get.should eq %([{"id":"org id","name":"place tech"}])
+
+ # Should standardise the format of MAC addresses
+ exec(:format_mac, "0x12:34:A6-789B").get.should eq %(1234a6789b)
+
+ floors_raw = %({"g_727894289773756676": {
+ "floorPlanId": "g_727894289773756676",
+ "width": 84.73653902424,
+ "height": 55.321510873304,
+ "topLeftCorner": {
+ "lat": 25.20105494120424,
+ "lng": 55.27527794417147
+ },
+ "bottomLeftCorner": {
+ "lat": 25.20128402691947,
+ "lng": 55.27478983574903
+ },
+ "bottomRightCorner": {
+ "lat": 25.200607564298647,
+ "lng": 55.27440203743774
+ },
+ "name": "BUILDING - L3"
+ },
+ "g_727894289773756679": {
+ "floorPlanId": "g_727894289773756679",
+ "width": 82.037895885132,
+ "height": 48.035263155936,
+ "topLeftCorner": {
+ "lat": 25.201070920997147,
+ "lng": 55.27523029269689
+ },
+ "bottomLeftCorner": {
+ "lat": 25.20126383588677,
+ "lng": 55.274803104166594
+ },
+ "bottomRightCorner": {
+ "lat": 25.200603702563107,
+ "lng": 55.27443896882145
+ },
+ "name": "Building - GF"
+ }})
+ floors = Hash(String, Cisco::Meraki::FloorPlan).from_json(floors_raw)
+
+ macs_raw = %({"683a1e545b0c": {
+ "floorPlanId": "g_727894289773756676",
+ "lat": 25.2011012305148,
+ "lng": 55.2749184519053,
+ "mac": "68:3a:1e:54:5b:0c",
+ "name": "1F-07"
+ },
+ "683a1e5474ed": {
+ "floorPlanId": "g_727894289773756679",
+ "lat": 25.2008175846893,
+ "lng": 55.2746475487948,
+ "mac": "68:3a:1e:54:74:ed",
+ "name": "GF-29"
+ }})
+ macs = Hash(String, Cisco::Meraki::NetworkDevice).from_json(macs_raw)
+
+  macs.each_value do |wap_device|
+    floor_plan = floors[wap_device.floor_plan_id.not_nil!]
+    # sanity check the location maths
+    loc = Cisco::Meraki::Location.calculate_location(floor_plan, wap_device, Time.utc)
+    pp! loc
+    loc.to_json
+  end
+end
diff --git a/drivers/cisco/meraki/geo.cr b/drivers/cisco/meraki/geo.cr
new file mode 100644
index 00000000000..cf2c419721c
--- /dev/null
+++ b/drivers/cisco/meraki/geo.cr
@@ -0,0 +1,73 @@
+require "math"
+require "json"
+
+module Cisco; end
+
+module Cisco::Meraki; end
+
+module Cisco::Meraki::Geo
+ struct Point
+ include JSON::Serializable
+
+ def initialize(@lat, @lng)
+ end
+
+ property lat : Float64
+ property lng : Float64
+ end
+
+ struct Distance
+ include JSON::Serializable
+
+ def initialize(@x, @y)
+ end
+
+ property x : Float64
+ property y : Float64
+ end
+
+ def self.calculate_xy(top_left : Point, bottom_left : Point, bottom_right : Point, position, distance : Distance)
+ y_base = geo_distance(top_left, bottom_left)
+ a = geo_distance(top_left, position)
+ c = geo_distance(bottom_left, position)
+ x_raw = triangle_height(a, y_base, c)
+
+ x_base = geo_distance(bottom_left, bottom_right)
+ a = geo_distance(bottom_left, position)
+ c = geo_distance(bottom_right, position)
+ y_raw = triangle_height(a, x_base, c)
+
+ # find the percentage distance from the origin
+ percentage_height = y_raw / y_base
+ percentage_width = x_raw / x_base
+
+ # adjust into range provided by the original distances
+ Distance.new(distance.x * percentage_width, distance.y * percentage_height)
+ end
+
+ # radius in meters, approx as we're using a perfect sphere the same volume as the earth
+ EarthRadiusApprox = 6371000.7900_f64
+ Radians = Math::PI / 180_f64
+
+ # https://www.movable-type.co.uk/scripts/latlong.html
+ # returns the distance in meters
+  def self.geo_distance(start : Point, ending)
+    lat_diff = (ending.lat - start.lat) * Radians
+    lng_diff = (ending.lng - start.lng) * Radians
+    start_lat_radian = start.lat * Radians
+    end_lat_radian = ending.lat * Radians
+
+    a = Math.sin(lat_diff / 2_f64) * Math.sin(lat_diff / 2_f64) +
+        Math.cos(start_lat_radian) * Math.cos(end_lat_radian) *
+          Math.sin(lng_diff / 2_f64) * Math.sin(lng_diff / 2_f64)
+
+ c = 2_f64 * Math.atan2(Math.sqrt(a), Math.sqrt(1_f64 - a))
+
+ EarthRadiusApprox * c
+ end
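+
+  # Sanity check (approximate, assuming a spherical earth): one degree of
+  # latitude spans roughly 111.2 km, so
+  #   Geo.geo_distance(Point.new(0.0, 0.0), Point.new(1.0, 0.0)) # => ~111195.0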
+
+ # https://www.omnicalculator.com/math/triangle-height
+ def self.triangle_height(a : Float64, base : Float64, c : Float64)
+ 0.5_f64 * Math.sqrt((a + base + c) * (base + c - a) * (a - base + c) * (a + base - c)) / base
+ end
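+
+  # Worked example: a 3-4-5 right triangle with base 4 has height 3
+  # relative to that base:
+  #   Geo.triangle_height(3.0, 4.0, 5.0) # => 3.0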
+end
diff --git a/drivers/cisco/meraki/scanning_api.cr b/drivers/cisco/meraki/scanning_api.cr
new file mode 100644
index 00000000000..577f044f4ab
--- /dev/null
+++ b/drivers/cisco/meraki/scanning_api.cr
@@ -0,0 +1,219 @@
+module Cisco; end
+
+require "json"
+require "./geo"
+
+module Cisco::Meraki
+ ISO8601 = "%FT%T%z"
+
+ class FloorPlan
+ include JSON::Serializable
+
+ @[JSON::Field(key: "floorPlanId")]
+ property id : String
+ property width : Float64
+ property height : Float64
+
+ @[JSON::Field(key: "topLeftCorner")]
+ property top_left : Geo::Point
+
+ @[JSON::Field(key: "bottomLeftCorner")]
+ property bottom_left : Geo::Point
+
+ @[JSON::Field(key: "bottomRightCorner")]
+ property bottom_right : Geo::Point
+
+ # This is useful for when we have to map meraki IDs to our zones
+ property name : String?
+
+ def to_distance
+ Geo::Distance.new(width, height)
+ end
+ end
+
+ class NetworkDevice
+ include JSON::Serializable
+
+ # Used for caching the location calculated for this device
+ # where an observation doesn't have location values but has a closest WAP
+ @[JSON::Field(ignore: true)]
+ property location : Location?
+
+ @[JSON::Field(key: "floorPlanId")]
+ property floor_plan_id : String?
+
+ property lat : Float64
+ property lng : Float64
+ property mac : String
+
+ # This is useful for when we have to map meraki IDs to our zones
+ property name : String?
+ end
+
+ class Client
+ include JSON::Serializable
+
+ property id : String
+ property mac : String
+ property description : String?
+
+ property ip : String?
+ property ip6 : String?
+
+ @[JSON::Field(key: "ip6Local")]
+ property ip6_local : String?
+
+ property user : String?
+
+ # 2020-09-29T07:53:08Z
+ @[JSON::Field(key: "firstSeen")]
+ property first_seen : String
+
+ @[JSON::Field(key: "lastSeen")]
+ property last_seen : String
+
+ property manufacturer : String?
+ property os : String?
+
+ @[JSON::Field(key: "recentDeviceMac")]
+ property recent_device_mac : String?
+ property ssid : String?
+ property vlan : Int32?
+ property switchport : String?
+ property status : String
+ property notes : String?
+
+ @[JSON::Field(ignore: true)]
+ property! time_added : Time
+ end
+
+ class RSSI
+ include JSON::Serializable
+
+ @[JSON::Field(key: "apMac")]
+ property access_point_mac : String
+ property rssi : Int32
+ end
+
+ class Location
+ include JSON::Serializable
+
+ def initialize(@x, @y, @lng, @lat, @variance, @floor_plan_id, @floor_plan_name, @time)
+ @mac = nil
+ @client = nil
+ @rssi_records = [] of RSSI
+ @nearest_ap_tags = [] of String
+ end
+
+ def self.calculate_location(floor : FloorPlan, device : NetworkDevice, time : Time) : Location
+ distance = Geo.calculate_xy(floor.top_left, floor.bottom_left, floor.bottom_right, device, floor.to_distance)
+ Location.new(distance.x, distance.y, device.lng, device.lat, 25_f64, floor.id, floor.name, time)
+ end
+
+ # NOTE:: This is not part of the location response,
+ # it is here to simplify processing
+ @[JSON::Field(ignore: true)]
+ property mac : String?
+
+ # NOTE:: this is not part of the location response,
+ # it is here to speed up processing
+ @[JSON::Field(ignore: true)]
+ property client : Client? = nil
+
+ # Multiple types as the location when parsed might include javascript `"NaN"`
+ property x : Float64 | String | Nil
+ property y : Float64 | String | Nil
+ property lng : Float64?
+ property lat : Float64?
+ property variance : Float64
+
+ @[JSON::Field(key: "floorPlanId")]
+ property floor_plan_id : String?
+
+ @[JSON::Field(key: "floorPlanName")]
+ property floor_plan_name : String?
+
+ @[JSON::Field(converter: Time::Format.new(Cisco::Meraki::ISO8601))]
+ property time : Time
+
+ @[JSON::Field(key: "nearestApTags")]
+ property nearest_ap_tags : Array(String)
+
+ @[JSON::Field(key: "rssiRecords")]
+ property rssi_records : Array(RSSI)
+
+ def x!
+ get_x.not_nil!
+ end
+
+ def y!
+ get_y.not_nil!
+ end
+
+ def get_x : Float64?
+ if tmp = x
+ if tmp.is_a?(Float64)
+ tmp
+ end
+ end
+ end
+
+ def get_y : Float64?
+ if tmp = y
+ if tmp.is_a?(Float64)
+ tmp
+ end
+ end
+ end
+ end
+
+ class LatestRecord
+ include JSON::Serializable
+
+ @[JSON::Field(key: "nearestApMac")]
+ property nearest_ap_mac : String
+
+ @[JSON::Field(key: "nearestApRssi")]
+ property nearest_ap_rssi : Int32
+
+ @[JSON::Field(converter: Time::Format.new(Cisco::Meraki::ISO8601))]
+ property time : Time
+ end
+
+ class Observation
+ include JSON::Serializable
+
+ @[JSON::Field(key: "clientMac")]
+ property client_mac : String
+
+ property manufacturer : String?
+ property ipv4 : String?
+ property ipv6 : String?
+ property ssid : String?
+ property os : String?
+
+ @[JSON::Field(key: "latestRecord")]
+ property latest_record : LatestRecord
+ property locations : Array(Location)
+ end
+
+ class Data
+ include JSON::Serializable
+
+ @[JSON::Field(key: "networkId")]
+ property network_id : String
+ property observations : Array(Observation)
+ end
+
+ class DevicesSeen
+ include JSON::Serializable
+
+ property version : String
+ property secret : String
+
+ @[JSON::Field(key: "type")]
+ property message_type : String
+
+ property data : Data
+ end
+end
diff --git a/drivers/cisco/switch/snooping_catalyst.cr b/drivers/cisco/switch/snooping_catalyst.cr
new file mode 100644
index 00000000000..5ac7662860e
--- /dev/null
+++ b/drivers/cisco/switch/snooping_catalyst.cr
@@ -0,0 +1,275 @@
+module Cisco; end
+
+module Cisco::Switch; end
+
+require "set"
+
+class Cisco::Switch::SnoopingCatalyst < PlaceOS::Driver
+ # Discovery Information
+ descriptive_name "Cisco Catalyst Switch IP Snooping"
+ generic_name :Snooping
+ tcp_port 22
+
+ # Communication settings
+ # tokenize delimiter: /\n|-- /
+
+ default_settings({
+ ssh: {
+ username: :cisco,
+ password: :cisco,
+ },
+ building: "building_code",
+ ignore_macs: {
+ "Cisco Phone Dock" => "7001b5",
+ },
+ })
+
+ # Interfaces that indicate they have a device connected
+ @check_interface = ::Set(String).new
+
+ # MAC, IP, Interface
+ @snooping = [] of Tuple(String, String, String)
+
+ # interface to MAC address mappings
+ @interface_macs = {} of String => String
+ @devices = {} of String => NamedTuple(mac: String, ip: String)
+
+ @hostname : String? = nil
+ @switch_name : String? = nil
+ @ignore_macs = ::Set(String).new
+
+ def on_load
+ # "--More--" is sent without a newline
+ transport.tokenizer = Tokenizer.new("\n", "--More--")
+
+ on_update
+ end
+
+ def on_update
+ @ignore_macs = ::Set.new((setting?(Hash(String, String), :ignore_macs) || {} of String => String).values)
+
+ self[:name] = @switch_name = setting?(String, :switch_name)
+ self[:ip_address] = config.ip.not_nil!.downcase
+ self[:building] = setting?(String, :building)
+ self[:level] = setting?(String, :level)
+ self[:last_successful_query] ||= 0
+ end
+
+ def connected
+ schedule.in(1.second) { query_connected_devices }
+ schedule.every(1.minute) { query_connected_devices }
+ end
+
+ def disconnected
+ schedule.clear
+ queue.clear
+ end
+
+  # Don't want the everyday user using this method
+ @[Security(Level::Administrator)]
+ def run(command : String)
+ do_send command
+ end
+
+ def query_interface_status
+ do_send "show interfaces status"
+ end
+
+ def query_mac_addresses
+ @interface_macs.clear
+ do_send "show mac address-table"
+ end
+
+ def query_snooping_bindings
+ @snooping.clear
+ do_send "show ip dhcp snooping binding"
+ end
+
+ @querying_devices : Bool = false
+
+ def query_connected_devices
+ return if @querying_devices
+ @querying_devices = true
+
+ logger.debug { "Querying for connected devices" }
+
+ query_interface_status.get
+ sleep 3.seconds
+
+ query_mac_addresses.get
+ sleep 3.seconds
+
+ query_snooping_bindings.get
+ sleep 2.seconds
+
+ nil
+ ensure
+ @querying_devices = false
+ end
+
+ def received(data, task)
+ data = String.new(data)
+ logger.debug { "Switch sent: #{data}" }
+
+ # determine the hostname
+ if @hostname.nil?
+ parts = data.split(">")
+ if parts.size == 2
+ self[:hostname] = @hostname = parts[0]
+
+ # Exit early as this line is not a response
+ return task.try &.success
+ end
+ end
+
+ # Detect more data available
+ # ==> --More--
+ if data =~ /More/
+ send(" ", priority: 99, retries: 0)
+ return task.try &.success
+ end
+
+ # Interface MAC Address detection
+ # 33 e4b9.7aa5.aa7f STATIC Gi3/0/8
+ # 10 f4db.e618.10a4 DYNAMIC Te2/0/40
+ if data =~ /STATIC|DYNAMIC/
+ parts = data.split(/\s+/).reject(&.empty?)
+ mac = format(parts[1])
+ interface = normalise(parts[-1])
+
+ @interface_macs[interface] = mac if mac && interface
+
+      return task.try &.success
+ end
+
+ # Interface change detection
+ # 07-Aug-2014 17:28:26 %LINK-I-Up: gi2
+ # 07-Aug-2014 17:28:31 %STP-W-PORTSTATUS: gi2: STP status Forwarding
+ # 07-Aug-2014 17:44:43 %LINK-I-Up: gi2, aggregated (1)
+ # 07-Aug-2014 17:44:47 %STP-W-PORTSTATUS: gi2: STP status Forwarding, aggregated (1)
+ # 07-Aug-2014 17:45:24 %LINK-W-Down: gi2, aggregated (2)
+ if data =~ /%LINK/
+ interface = normalise(data.split(",")[0].split(/\s/)[-1])
+
+ if data =~ /Up:/
+ logger.debug { "Notify Up: #{interface}" }
+ @check_interface << interface
+
+ # Delay here is to give the PC some time to negotiate an IP address
+ # schedule.in(3000) { query_snooping_bindings }
+ elsif data =~ /Down:/
+ logger.debug { "Notify Down: #{interface}" }
+ # We are no longer interested in this interface
+ @check_interface.delete(interface)
+ end
+
+ self[:interfaces] = @check_interface
+
+ return task.try &.success
+ end
+
+ if data.starts_with?("Total number")
+ logger.debug { "Processing #{@snooping.size} bindings" }
+ checked = Set(String).new
+ devices = {} of String => NamedTuple(mac: String, ip: String)
+ state_changed = false
+
+ @snooping.each do |mac, ip, interface|
+ next unless @check_interface.includes?(interface)
+ next unless @interface_macs[interface]? == mac
+ next if checked.includes?(interface)
+
+ checked << interface
+ iface = @devices[interface]? || {mac: "", ip: ""}
+
+ if iface[:ip] != ip || iface[:mac] != mac
+ logger.debug { "New connection on #{interface} with #{ip}: #{mac}" }
+ devices[interface] = {mac: mac, ip: ip}
+ state_changed = true
+ else
+ devices[interface] = iface
+ end
+ end
+
+ # did an interface change state
+ if state_changed
+ @devices = devices
+ self[:devices] = devices
+ end
+
+ # As a link up or down might have modified this list
+ if @check_interface != checked
+ @check_interface = checked
+ self[:interfaces] = checked
+ end
+
+ self[:last_successful_query] = Time.utc.to_unix
+
+ return task.try &.success
+ end
+
+ # Grab the parts of the response
+ entries = data.split(/\s+/).reject(&.empty?)
+
+ # show interfaces status
+ # Port Name Status Vlan Duplex Speed Type
+ # Gi1/1 notconnect 1 auto auto No Gbic
+ # Fa6/1 connected 1 a-full a-100 10/100BaseTX
+ if entries.includes?("connected")
+ interface = entries[0].downcase
+ return task.try &.success if @check_interface.includes? interface
+
+ logger.debug { "Interface Up: #{interface}" }
+ @check_interface << interface
+
+ return task.try &.success
+ elsif entries.includes?("notconnect")
+ interface = entries[0].downcase
+ return task.try &.success unless @check_interface.includes? interface
+
+ # Delete the lookup records
+ logger.debug { "Interface Down: #{interface}" }
+ @check_interface.delete(interface)
+
+ return task.try &.success
+ end
+
+ # We are looking for MAC to IP address mappings
+ # =============================================
+ # MacAddress IpAddress Lease(sec) Type VLAN Interface
+ # ------------------ --------------- ---------- ------------- ---- --------------------
+ # 00:21:CC:D5:33:F4 10.151.130.1 16283 dhcp-snooping 113 GigabitEthernet3/0/43
+ # Total number of bindings: 3
+ if entries.size > 2
+ interface = normalise(entries[-1])
+
+ # We only want entries that are currently active
+ if @check_interface.includes? interface
+ # Ensure the data is valid
+ mac = entries[0]
+ if mac =~ /^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$/
+ mac = format(mac)
+ ip = entries[1]
+
+ @snooping << {mac, ip, interface} unless @ignore_macs.includes?(mac[0..5])
+ end
+ end
+ end
+
+ task.try &.success
+ end
+
+ protected def do_send(cmd, **options)
+ logger.debug { "requesting: #{cmd}" }
+ send("#{cmd}\n", **options)
+ end
+
+ protected def format(mac)
+ mac.gsub(/(0x|[^0-9A-Fa-f])*/, "").downcase
+ end
+
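+  # Maps the long interface names in "show ip dhcp snooping binding" output to
+  # the short forms used in "show mac address-table", e.g.
+  #   normalise("GigabitEthernet3/0/8") # => "gi3/0/8"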
+ protected def normalise(interface)
+ # Port-channel == po
+ interface.downcase.gsub("tengigabitethernet", "te").gsub("twogigabitethernet", "tw").gsub("gigabitethernet", "gi").gsub("fastethernet", "fa")
+ end
+end
diff --git a/drivers/cisco/switch/snooping_catalyst_spec.cr b/drivers/cisco/switch/snooping_catalyst_spec.cr
new file mode 100644
index 00000000000..2cb082d8412
--- /dev/null
+++ b/drivers/cisco/switch/snooping_catalyst_spec.cr
@@ -0,0 +1,61 @@
+DriverSpecs.mock_driver "Cisco::Switch::SnoopingCatalyst" do
+ transmit "SG-MARWFA61301>"
+ sleep 1.5.seconds
+
+ should_send "show interfaces status\n"
+ transmit "show interfaces status\n"
+ status[:hostname].should eq("SG-MARWFA61301")
+
+ transmit %(Port Name Status Vlan Duplex Speed Type
+Gi1/0/1 notconnect 113 auto auto 10/100/1000BaseTX
+Gi1/0/2 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/11 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/12 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/13 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/14 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/15 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/16 notconnect 113 auto auto 10/100/1000BaseTX
+Gi2/0/17 notconnect 113 auto auto 10/100/1000BaseTX
+Gi3/0/8 connected 33 auto auto 10/100/1000BaseTX
+ --More--)
+
+ should_send " "
+ transmit %(
+Gi4/0/48 notconnect 113 auto auto 10/100/1000BaseTX
+Gi4/1/1 notconnect 1 auto auto unknown
+Gi4/1/2 notconnect 1 auto auto unknown
+Te4/1/4 connected trunk full 10G SFP-10GBase-SR
+Po1 connected trunk a-full a-10G
+)
+
+ sleep 3.1.seconds
+
+ should_send "show mac address-table\n"
+ transmit "show mac address-table\n"
+
+ transmit %(Vlan MAC Type Port
+33 e4b9.7aa5.aa7f STATIC Gi3/0/8
+10 f4db.e618.10a4 DYNAMIC Te2/0/40
+)
+
+ sleep 3.1.seconds
+
+ should_send "show ip dhcp snooping binding\n"
+ transmit %(MacAddress IpAddress Lease(sec) Type VLAN Interface
+------------------ --------------- ---------- ------------- ---- --------------------
+38:C9:86:17:A2:07 192.168.1.15 19868 dhcp-snooping 113 tenGigabitEthernet4/1/4
+E4:B9:7A:A5:AA:7F 10.151.128.150 16532 dhcp-snooping 33 GigabitEthernet3/0/8
+00:21:CC:D5:33:F4 10.151.130.1 16283 dhcp-snooping 113 GigabitEthernet3/0/34
+Total number of bindings: 3
+
+)
+
+ status["devices"].should eq({
+ "gi3/0/8" => {
+ "mac" => "e4b97aa5aa7f",
+ "ip" => "10.151.128.150",
+ },
+ })
+
+ status["interfaces"].should eq(["gi3/0/8"])
+end
diff --git a/drivers/denon/amplifier/av_receiver.cr b/drivers/denon/amplifier/av_receiver.cr
new file mode 100644
index 00000000000..1088019b1bc
--- /dev/null
+++ b/drivers/denon/amplifier/av_receiver.cr
@@ -0,0 +1,213 @@
+require "digest/md5"
+require "placeos-driver/interface/muteable"
+require "placeos-driver/interface/powerable"
+require "placeos-driver/interface/switchable"
+
+module Denon; end
+
+module Denon::Amplifier; end
+
+# Protocol: https://aca.im/driver_docs/Denon/Denon%20AVR%20PROTOCOL%20V7.5.0.pdf
+#
+# NOTE:: Denon doesn't respond to commands that request the current state
+# (ie if the volume is 100 and you request 100 it will not respond)
+#
+
+class Denon::Amplifier::AvReceiver < PlaceOS::Driver
+ include PlaceOS::Driver::Interface::Powerable
+ include PlaceOS::Driver::Utilities::Transcoder
+
+ @channel : Channel(String) = Channel(String).new
+ @stable_power : Bool = true
+
+ COMMANDS = {
+ power: :PW,
+ power_query: :PW?,
+ mute: :MU,
+ mute_query: :MU?,
+ volume: :MV,
+ volume_query: :MV?,
+ input: :SI,
+ input_query: :SI?,
+ }
+
+ @volume_range = 0..196
+
+ default_settings({
+ max_waits: 10,
+ timeout: 3000,
+ })
+ # Discovery Information
+ tcp_port 23 # Telnet
+ descriptive_name "Denon AVR (Switcher Amplifier)"
+ generic_name :Switcher
+
+ # Denon requires some breathing room
+ # delay between_sends: 30
+ # delay on_receive: 30
+
+ def on_load
+ # transport.tokenizer = Tokenizer.new(Bytes[0x0D])
+ transport.tokenizer = Tokenizer.new("\r")
+ self[:volume_min] = 0
+    self[:volume_max] = @volume_range.max # == 98 * 2, doubled to account for the half steps
+ on_update
+ end
+
+  def on_update
+    self[:max_waits] = setting?(Int32, :max_waits) || 10
+    self[:timeout] = setting?(Int32, :timeout) || 3000
+  end
+
+ def connected
+ #
+ # Get state
+ #
+ # power?
+ # input?
+ # mute?
+
+ schedule.every(60.seconds) do
+ logger.info { "-- Polling Denon AVR" }
+ power?
+ do_send(:input, priority: 0, name: :input)
+ end
+ end
+
+ def disconnected
+ schedule.clear
+ end
+
+  def power(state : Bool = false)
+    # self[:power] is current as we would be informed otherwise
+    if state && (self[:power] == "OFF" || self[:power] == "STANDBY") # Request to power on if off
+      do_send(:power, "ON", delay: 3.seconds, name: :power) # Manual states a 1 second delay, 3 just to be safe
+    elsif !state && self[:power] == "ON" # Request to power off if on
+      do_send(:power, "STANDBY", delay: 3.seconds, name: :power)
+    end
+  end
+
+ def power?
+ # def power?(**options)
+ # options[:emit] = {:power => block} unless block.nil?
+ do_send(:power_query, priority: 0, name: :power_query)
+ end
+
+ def mute?
+ self[:mute] = "OFF"
+ do_send(:mute_query, priority: 0, name: :mute_query)
+ end
+
+ def mute(state : Bool = true)
+ req = state ? "ON" : "OFF"
+ return if self[:mute] == req
+ do_send(:mute, req, name: :mute)
+ end
+
+ def mute_audio(state : Bool = true)
+ mute state
+ end
+
+ def unmute
+ mute false
+ end
+
+ def unmute_audio
+ unmute
+ end
+
+ def volume(level : Int32 = 0)
+ value = 0
+ value = level if @volume_range.includes?(level.to_i)
+
+ return if self[:volume] == value
+
+    # The Denon volume scale is unusual: 99 is volume off, 995 (99.5) is the
+    # minimum audible volume, 0 is the next step up and 985 (98.5) is the
+    # loudest volume => so we treat 99, 995 and 0 all as 0
+    step = value % 2
+    actual = value // 2
+    req = actual.to_s.rjust(2, '0')
+    req += "5" if step != 0
+
+    do_send(:volume, req, name: :volume) # name prevents needless queuing of commands
+  end
+
+ def volume?
+ do_send(:volume_query, priority: 0, name: :volume_query)
+ end
+
+ # Just here for documentation (there are many more)
+ #
+ # INPUTS = [:cd, :tuner, :dvd, :bd, :tv, :"sat/cbl", :dvr, :game, :game2, :"v.aux", :dock]
+  def input(input : String = "")
+    input = input.upcase
+    do_send(:input, input, name: :input) if input != self[:input]
+  end
+
+ def input?
+ do_send(:input_query, priority: 0, name: :input_query)
+ end
+
+ def received(data, task)
+ data = String.new(data)
+ logger.info { "Denon sent #{data.inspect}" }
+
+
+ # Process the response
+ cmd = data[0..1] # first 2 chars are the key / command
+ val = data[2..-2] # anything following the above and before \r is a response value
+
+ case cmd
+ when "PW"
+ self[:power] = val
+ when "SI"
+ self[:input] = val
+ when "MV"
+      # May send 'MVMAX 98' after the volume command
+      return task.try &.success if val.size > 3
+      self[:volume] = val
+
+ when "MU"
+ self[:mute] = val
+ else
+ return :ignore
+ end
+ return task.try &.success
+ end
+
+ protected def do_send(command, param = nil, **options)
+ # prepare the command
+ cmd = if param.nil?
+ "#{COMMANDS[command]}"
+ else
+ "#{COMMANDS[command]}#{param}"
+ end
+    logger.info { "Queuing: #{cmd}" }
+
+ # queue the request
+ queue(**({
+ name: command,
+ }.merge(options))) do
+ @channel = Channel(String).new
+ # send the request
+ logger.info { " Sending: #{cmd}" }
+ transport.send(cmd)
+ end
+ end
+end
diff --git a/drivers/denon/amplifier/av_receiver_spec.cr b/drivers/denon/amplifier/av_receiver_spec.cr
new file mode 100644
index 00000000000..f844a6d7385
--- /dev/null
+++ b/drivers/denon/amplifier/av_receiver_spec.cr
@@ -0,0 +1,71 @@
+DriverSpecs.mock_driver "Denon::Amplifier::AvReceiver" do
+ ####
+ # POWER
+ #
+ sleep 1.second
+ # query power
+ exec(:power?)
+ should_send("PW?")
+ responds("PWOFF\r")
+ status[:power].should eq("OFF")
+ # turn power on
+ exec(:power, true)
+ should_send("PWON")
+ responds("PWON\r")
+ status[:power].should eq("ON")
+ # power off turns amp to STANDBY not actually OFF
+ exec(:power, false)
+ should_send("PWSTANDBY")
+ responds("PWSTANDBY\r")
+ status[:power].should eq("STANDBY")
+
+ ####
+ # INPUT
+ #
+ sleep 1.second
+ # query input > DVD
+ exec(:input?)
+ should_send("SI?")
+ responds("SIDVD\r")
+ status[:input].should eq("DVD")
+  # change input to tuner
+ exec(:input, "TUNER")
+ should_send("SITUNER")
+ responds("SITUNER\r")
+ status[:input].should eq("TUNER")
+
+ ####
+ # VOLUME
+ #
+ sleep 1.second
+ # query
+ exec(:volume?)
+ should_send("MV?")
+ responds("MV80\r")
+ status[:volume].should eq("80")
+ # change volume
+ exec(:volume, 78)
+  should_send("MV39")
+  responds("MV39\r")
+  status[:volume].should eq("39")
+
+ ####
+ # MUTE
+ #
+ sleep 1.second
+ # query
+ exec(:mute?)
+ should_send("MU?")
+ responds("MUOFF\r")
+ status[:mute].should eq("OFF")
+ # mute on
+ exec(:mute, true)
+ should_send("MUON")
+ responds("MUON\r")
+ status[:mute].should eq("ON")
+ # mute off
+ exec(:mute, false)
+ should_send("MUOFF")
+ responds("MUOFF\r")
+ status[:mute].should eq("OFF")
+end
diff --git a/drivers/epson/projector/esc_vp21.cr b/drivers/epson/projector/esc_vp21.cr
new file mode 100644
index 00000000000..0555e875f0b
--- /dev/null
+++ b/drivers/epson/projector/esc_vp21.cr
@@ -0,0 +1,224 @@
+require "placeos-driver/interface/muteable"
+require "placeos-driver/interface/powerable"
+require "placeos-driver/interface/switchable"
+
+class Epson::Projector::EscVp21 < PlaceOS::Driver
+ include Interface::Powerable
+ include Interface::Muteable
+
+ enum Input
+ HDMI = 0x30
+ HDBaseT = 0x80
+ end
+
+ include Interface::InputSelection(Input)
+
+ # Discovery Information
+ tcp_port 3629
+ descriptive_name "Epson Projector"
+ generic_name :Display
+
+ @power_target : Bool? = nil
+ @unmute_volume : Int32? = nil
+
+ def on_load
+ transport.tokenizer = Tokenizer.new("\r")
+ self[:type] = :projector
+ end
+
+ def connected
+ # Have to init comms
+ send("ESC/VP.net\x10\x03\x00\x00\x00\x00")
+ schedule.every(52.seconds, true) { do_poll }
+ end
+
+ def disconnected
+ schedule.clear
+ self[:power] = false
+ end
+
+ def power(state : Bool)
+ if state
+ @power_target = true
+ logger.debug { "-- epson Proj, requested to power on" }
+ do_send(:power, "ON", delay: 40.seconds, name: "power")
+ else
+ @power_target = false
+ logger.debug { "-- epson Proj, requested to power off" }
+ do_send(:power, "OFF", delay: 10.seconds, name: "power")
+ end
+ power?
+ end
+
+ def power?(**options) : Bool
+ do_send(:power, **options, name: :power?).get
+ !!self[:power]?.try(&.as_bool)
+ end
+
+ def switch_to(input : Input)
+ logger.debug { "-- epson Proj, requested to switch to: #{input}" }
+ do_send(:input, input.value.to_s(16), name: :input)
+
+    # for a responsive UI
+    self[:input] = input
+ self[:video_mute] = false
+ input?
+ end
+
+ def input?
+ do_send(:input, name: :input_query, priority: 0).get
+ self[:input]
+ end
+
+ # Volume commands are sent using the inpt command
+ def volume(vol : Int32, **options)
+ vol = vol.clamp(0, 255)
+ @unmute_volume = self[:volume].as_i if (mute = vol == 0) && self[:volume]?
+ do_send(:volume, vol, **options, name: :volume)
+
+ # for a responsive UI
+ self[:volume] = vol
+ self[:audio_mute] = mute
+ volume?
+ end
+
+ def volume?
+ do_send(:volume, name: :volume?, priority: 0).get
+ self[:volume]?.try(&.as_i)
+ end
+
+ def mute(
+ state : Bool = true,
+ index : Int32 | String = 0,
+ layer : MuteLayer = MuteLayer::AudioVideo
+ )
+ case layer
+ when .audio_video?
+ do_send(:av_mute, state ? "ON" : "OFF", name: :mute)
+ do_send(:av_mute, name: :mute?, priority: 0)
+ when .video?
+ do_send(:video_mute, state ? "ON" : "OFF", name: :video_mute)
+ video_mute?
+ when .audio?
+ val = state ? 0 : @unmute_volume.not_nil!
+ volume(val)
+ end
+ end
+
+ def video_mute?
+ do_send(:video_mute, name: :video_mute?, priority: 0).get
+ !!self[:video_mute]?.try(&.as_bool)
+ end
+
+ ERRORS = [
+ "00: no error",
+ "01: fan error",
+ "03: lamp failure at power on",
+ "04: high internal temperature",
+ "06: lamp error",
+ "07: lamp cover door open",
+ "08: cinema filter error",
+ "09: capacitor is disconnected",
+ "0A: auto iris error",
+ "0B: subsystem error",
+ "0C: low air flow error",
+ "0D: air flow sensor error",
+ "0E: ballast power supply error",
+ "0F: shutter error",
+    "10: peltier cooling error",
+ "11: pump cooling error",
+ "12: static iris error",
+ "13: power supply unit error",
+ "14: exhaust shutter error",
+ "15: obstacle detection error",
+ "16: IF board discernment error",
+ ]
+
+ def inspect_error
+ do_send(:error, priority: 0)
+ end
+
+ COMMAND = {
+ power: "PWR",
+ input: "SOURCE",
+ volume: "VOL",
+ av_mute: "MUTE",
+ video_mute: "MSEL",
+ error: "ERR",
+ lamp: "LAMP",
+ }
+ RESPONSE = COMMAND.to_h.invert
+
+ def received(data, task)
+ return task.try(&.success) if data.size <= 2
+ data = String.new(data[1..-2])
+ logger.debug { "epson Proj sent: #{data}" }
+
+ data = data.split('=')
+ case RESPONSE[data[0]]
+ when :error
+ if data[1]?
+ code = data[1].to_i(16)
+ self[:last_error] = ERRORS[code]? || "#{data[1]}: unknown error code #{code}"
+ return task.try(&.success("Epson PJ error was #{self[:last_error]}"))
+ else # Lookup error!
+ return task.try(&.abort("Epson PJ sent error response for #{task.not_nil!.name || "unknown"}"))
+ end
+    when :power
+      # PWR codes: 00 standby, 01 on, 02 warming, 03 cooling, 04/05 standby variants
+      state = data[1].to_i
+      self[:power] = state == 1 || state == 2
+ self[:warming] = state == 2
+ self[:cooling] = state == 3
+
+ if self[:warming].as_bool || self[:cooling].as_bool
+ schedule.in(5.seconds) { power?(priority: 0) }
+ end
+
+ if (power_target = @power_target) && self[:power] == power_target
+ @power_target = nil
+ self[:video_mute] = false unless self[:power].as_bool
+ end
+ when :av_mute
+ self[:video_mute] = self[:audio_mute] = data[1] == "ON"
+ self[:volume] = 0
+ when :video_mute
+ self[:video_mute] = data[1] == "ON"
+ when :volume
+ vol = data[1].to_i
+ self[:volume] = vol
+ mute = vol == 0
+ self[:audio_mute] = mute if mute
+ @unmute_volume ||= vol unless mute
+ when :lamp
+ self[:lamp_usage] = data[1].to_i
+ when :input
+      self[:input] = Input.from_value?(data[1].to_i(16)) || "unknown"
+ end
+
+ task.try(&.success)
+ end
+
+ def do_poll
+ if power?(priority: 0)
+ if power_target = @power_target
+ if self[:power]? != power_target
+ power(power_target)
+ else
+ @power_target = nil
+ end
+ else
+ input?
+ video_mute?
+ volume?
+ end
+ end
+ do_send(:lamp, priority: 0)
+ end
+
+ private def do_send(command, param = nil, **options)
+ command = COMMAND[command]
+ cmd = param ? "#{command} #{param}\r" : "#{command}?\r"
+ logger.debug { "Epson proj sending #{command}: #{cmd}" }
+ send(cmd, **options)
+ end
+end
diff --git a/drivers/epson/projector/esc_vp21_spec.cr b/drivers/epson/projector/esc_vp21_spec.cr
new file mode 100644
index 00000000000..a37f904101b
--- /dev/null
+++ b/drivers/epson/projector/esc_vp21_spec.cr
@@ -0,0 +1,59 @@
+DriverSpecs.mock_driver "Epson::Projector::EscVp21" do
+ # connected
+ should_send("ESC/VP.net\x10\x03\x00\x00\x00\x00")
+ responds(":\r")
+ # do_poll
+ # power?
+ should_send("PWR?\r")
+ responds(":PWR=01\r")
+ status[:power].should eq(true)
+ # input?
+ should_send("SOURCE?\r")
+ responds(":SOURCE=30\r")
+ status[:input].should eq("HDMI")
+ # video_mute?
+ should_send("MSEL?\r")
+ responds(":MSEL=0\r")
+ status[:video_mute].should eq(false)
+ # volume?
+ should_send("VOL?\r")
+ responds(":VOL=10\r")
+ status[:volume].should eq(10)
+ # lamp
+ should_send("LAMP?\r")
+ responds(":LAMP=20\r")
+ status[:lamp_usage].should eq(20)
+
+ exec(:mute)
+ should_send("MUTE ON\r")
+ responds(":\r")
+ should_send("MUTE?\r")
+ responds(":MUTE=ON\r")
+ status[:video_mute].should eq(true)
+ status[:audio_mute].should eq(true)
+ status[:volume].should eq(0)
+
+ exec(:switch_to, "HDBaseT")
+ should_send("SOURCE 80\r")
+ responds(":\r")
+ should_send("SOURCE?\r")
+ responds(":SOURCE=80\r")
+ status[:input].should eq("HDBaseT")
+ status[:video_mute].should eq(false)
+
+ exec(:mute_audio, false)
+ should_send("VOL 10\r")
+ responds(":\r")
+ should_send("VOL?\r")
+ responds(":VOL=10\r")
+ status[:volume].should eq(10)
+ status[:audio_mute].should eq(false)
+
+ exec(:volume, 50)
+ should_send("VOL 50\r")
+ responds(":\r")
+ should_send("VOL?\r")
+ responds(":VOL=50\r")
+ status[:volume].should eq(50)
+ status[:audio_mute].should eq(false)
+end
diff --git a/drivers/exterity/avedia_player/r92xx.cr b/drivers/exterity/avedia_player/r92xx.cr
new file mode 100644
index 00000000000..1d1bda86d06
--- /dev/null
+++ b/drivers/exterity/avedia_player/r92xx.cr
@@ -0,0 +1,170 @@
+require "telnet"
+
+module Exterity; end
+
+module Exterity::AvediaPlayer; end
+
+class Exterity::AvediaPlayer::R92xx < PlaceOS::Driver
+ descriptive_name "Exterity Avedia Player (R92xx)"
+ generic_name :IPTV
+ tcp_port 23
+
+ default_settings({
+ max_waits: 100,
+ username: "admin",
+ password: "labrador",
+ })
+
+ @ready : Bool = false
+ @telnet : Telnet? = nil
+
+ def on_load
+ new_telnet_client
+ transport.pre_processor { |bytes| @telnet.try &.buffer(bytes) }
+ end
+
+ def connected
+ @ready = false
+ self[:ready] = false
+
+ schedule.every(60.seconds) do
+ logger.info { "-- Polling Exterity Player" }
+ tv_info
+ end
+ end
+
+ def disconnected
+ # ensures the buffer is cleared
+ new_telnet_client
+
+ schedule.clear
+ end
+
+ def channel(number : Int32 | String)
+ if number.is_a? Number
+ set :playChannelNumber, number
+ else
+ stream number
+ end
+ end
+
+ def stream(uri : String)
+ set :playChannelUri, uri
+ end
+
+ def dump
+ do_send "^dump!", name: :dump
+ end
+
+ def help
+ do_send "^help!", name: :help
+ end
+
+ def reboot
+ remote :reboot
+ end
+
+ def tv_info
+ get :tv_info
+ end
+
+ def version
+ get :SoftwareVersion
+ end
+
+ def manual(cmd : String)
+ do_send cmd
+ end
+
+ def received(data, task)
+ data = String.new(data).strip
+
+ logger.info { "Exterity sent #{data}" }
+
+ if @ready
+ # Detect if logged out of serialCommandInterface
+ if data =~ /sh: .* not found/i
+ # Launch command processor
+ do_send "/usr/bin/serialCommandInterface", wait: false, delay: 2.seconds, priority: 95
+ return :failure
+ end
+
+ # Extract response
+ data.split("!").map(&.strip("^")).each do |resp|
+ process_resp(resp, task)
+ end
+ elsif data =~ /Exterity Control Interface| Exit/i
+ logger.info { "-- got the control interface message, we're READY now" }
+ @ready = true
+ self[:ready] = true
+ version
+ elsif data =~ /login:/i
+ logger.info { "-- got the login: prompt" }
+ transport.tokenizer = Tokenizer.new("\r")
+
+ # login
+ do_send setting(String, :username), wait: false, delay: 200.milliseconds, priority: 98
+ do_send setting(String, :password), wait: false, delay: 200.milliseconds, priority: 97
+
+ # select open shell option
+ do_send "6", wait: false, delay: 2.seconds, priority: 96
+
+ # launch command processor
+ do_send "/usr/bin/serialCommandInterface", wait: false, delay: 200.milliseconds, priority: 95
+
+ # we need to disconnect if we don't see the serialCommandInterface after a certain amount of time
+ schedule.in(20.seconds) do
+ if !@ready
+          logger.error { "Exterity connection failed to become ready after 20 seconds. Check username and password." }
+ disconnect
+ end
+ end
+    else
+      logger.info { "Unexpected response: #{data}" }
+    end
+
+ task.try &.success
+ end
+
+ protected def process_resp(data, task)
+ logger.info { "Resp details #{data}" }
+
+ parts = data.split ':'
+
+    case parts[0]
+    when "error"
+      if task
+ logger.warn { "Error when requesting: #{task.try &.name}" }
+ else
+ logger.warn { "Error response received" }
+ end
+ when "tv_info"
+ self[:tv_info] = parts[1]
+ when "SoftwareVersion"
+ self[:version] = parts[1]
+ end
+ end
+
+ protected def new_telnet_client
+ @telnet = Telnet.new do |data|
+ transport.send(data)
+ end
+ end
+
+ protected def do_send(command, **options)
+ logger.info { "requesting #{command}" }
+ send @telnet.not_nil!.prepare(command), **options
+ end
+
+ protected def set(command, data, **options)
+ # options[:name] = :"set_#{command}" unless options[:name]
+ do_send "^set:#{command}:#{data}!", **options
+ end
+
+ protected def remote(cmd, **options)
+ do_send "^send:#{cmd}!", **options
+ end
+
+ protected def get(status, **options)
+ do_send "^get:#{status}!", **options
+ end
+end
diff --git a/drivers/exterity/avedia_player/r92xx_protocol.md b/drivers/exterity/avedia_player/r92xx_protocol.md
new file mode 100644
index 00000000000..fdad269cab1
--- /dev/null
+++ b/drivers/exterity/avedia_player/r92xx_protocol.md
@@ -0,0 +1,415 @@
+
+# Exterity AvediaPlayer R9200 Control Protocol
+
+NOTE: All information in this document was obtained by exploring the R9200 device itself;
+no information was provided by Exterity during this process.
+
+
+## Connecting
+
+* Telnet Protocol (port 23)
+* `telnet 192.168.1.13`
+* Default username: `admin`
+* Default password: `labrador`
+* Select option `6` to run a shell
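
The Crystal driver that accompanies this document automates the login sequence above by classifying each chunk of telnet output. A rough sketch of that classification, in Python for illustration only (the branch names here are mine, not part of the protocol):

```python
def classify(line: str, ready: bool) -> str:
    """Roughly mirror the driver's received() branches for one telnet chunk."""
    low = line.lower()
    if ready and low.startswith("sh:") and "not found" in low:
        # the shell rejected a command: we fell out of serialCommandInterface
        return "relaunch_interface"
    if "exterity control interface" in low:
        return "ready"              # command processor banner: start sending ^...! commands
    if "login:" in low:
        return "send_credentials"   # answer with username, password, then option 6
    return "response" if ready else "ignore"

classify("login:", ready=False)                      # 'send_credentials'
classify("Exterity Control Interface", ready=False)  # 'ready'
classify("sh: foo: not found", ready=True)           # 'relaunch_interface'
```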
+
+
+## Shell Navigation
+
+Once in the shell you can use the following tools to explore the system:
+
+* `less` for scanning through files
+* `cat` for dumping files
+* `ps aux` for viewing processes
+* `ls` for listing files
+
+The file system is read-only, so moving files to `/usr/local/www` for download was not possible.
+
+
+## Applications
+
+* Applications are installed in `/usr/bin`:
+ * `serialCommandInterface` allows programmatic control of the device
+ * `irsend` for sending IR commands
+* Configuration files are in `/etc`
+  * `lircd.conf` contains the human-readable names of all the IR commands
+
+```
+begin remote
+
+ name exterity_remote_2
+
+ bits 16
+ flags SPACE_ENC
+ eps 20
+ aeps 200
+
+ header 8800 4400
+ one 550 1650
+ zero 550 550
+ ptrail 550
+ repeat 8800 2200
+ pre_data_bits 16
+ pre_data 0xB5B7
+ gap 38500
+ toggle_bit 0
+ frequency 38000
+
+#! exterity_bit_period 560
+#! exterity_aeps 500
+#! exterity_rmpower_len 66
+#! exterity_rmpower_pattern 16 8 1 3 1 1 1 3 1 3 1 1 1 3 1 1 1 3 1 3 1 1 1 3 1 3 1 1 1 3 1 3 1 3 1 3 1 1 1 3 1 1 1 3 1 3 1 1 1 3 1 1 1 3 1 1 1 3 1 1 1 1 1 3 1 1
+
+ begin codes
+ rm_1 0x45ba
+ rm_2 0x35ca
+ rm_3 0x6d92
+ rm_4 0xc53a
+ rm_5 0xb54a
+ rm_6 0xed12
+ rm_7 0x25da
+ rm_8 0x758a
+ rm_9 0x1de2
+ rm_cancel 0x03fc
+ rm_0 0xf50a
+ rm_menu 0xa55a
+ rm_power 0xad52
+ rm_chup 0x0df2
+ rm_chdown 0x8d72
+ rm_volup 0x5da2
+ rm_voldown 0xdd22
+ rm_up 0x4db2
+ rm_left 0x956a
+ rm_enter 0xcd32
+ rm_right 0xbd42
+ rm_down 0x2dd2
+ rm_mute 0xa35c
+ rm_red 0x837c
+ rm_green 0x43bc
+ rm_yellow 0xc33c
+ rm_blue 0x23dc
+ rm_rewind 0x15ea
+ rm_play 0x55aa
+ rm_pause 0xe51a
+ rm_ff 0x3dc2
+ rm_skipback 0x639c
+ rm_skipfwd 0xe31c
+ rm_stop 0x7d82
+ rm_record 0x659a
+ rm_exterity 0x13ec
+ rm_fn_tv 0x936c
+ rm_fn_home 0x53ac
+ rm_guide 0xd32c
+ rm_subtitle 0x857A
+ rm_info 0x33CC
+ rm_help 0xB34C
+ rm_audio 0x9D62
+ rm_teletext 0xD52A
+ rm_av 0xFD02
+ end codes
+
+end remote
+
+```
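
The `begin codes … end codes` block maps each remote-key name to a 16-bit scan code. As an illustration only (not an Exterity-supplied tool), a few lines of Python can lift that table out of a `lircd.conf` dump:

```python
import re

def parse_lirc_codes(conf_text: str) -> dict:
    """Extract key-name -> scan-code pairs from a lircd.conf codes block."""
    codes = {}
    in_codes = False
    for raw in conf_text.splitlines():
        line = raw.strip()
        if line == "begin codes":
            in_codes = True
        elif line == "end codes":
            in_codes = False
        elif in_codes:
            m = re.match(r"(\w+)\s+0x([0-9a-fA-F]+)$", line)
            if m:
                codes[m.group(1)] = int(m.group(2), 16)
    return codes

sample = """
  begin codes
    rm_power   0xad52
    rm_mute    0xa35c
  end codes
"""
parse_lirc_codes(sample)["rm_power"] == 0xAD52  # True
```

The extracted names (`rm_power`, `rm_mute`, …) are exactly what `irsend` expects as key arguments.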
+
+
+## Serial Command Interface
+
+* All lines start with `^`
+* All lines end with `!`
+
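
Every request follows the same frame: a verb (`get`, `set`, `send`), colon-separated arguments, wrapped in `^…!`; responses use the same wrapper with `key:value` inside. A minimal sketch of that framing (Python here for illustration; the real driver is Crystal):

```python
def frame(verb: str, *args: str) -> str:
    """Wrap a command in the ^...! framing used by serialCommandInterface."""
    return "^" + ":".join((verb,) + args) + "!"

def parse_responses(buffer: str) -> list:
    """Split a raw buffer on '!' and return (key, value) pairs."""
    pairs = []
    for chunk in buffer.split("!"):
        chunk = chunk.strip().strip("^")
        if not chunk:
            continue
        key, _, value = chunk.partition(":")
        pairs.append((key, value))
    return pairs

frame("get", "tv_info")                   # '^get:tv_info!'
frame("set", "playChannelNumber", "7")    # '^set:playChannelNumber:7!'
parse_responses("^SoftwareVersion:4.1!")  # [('SoftwareVersion', '4.1')]
```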
+Dump of the help text:
+
+
+```
+^help!
+To display a value: ^get: