...(Quickstart guide is coming soon.)

What is the SlappFramework?

SlappFramework is an experiment in SQL-centric custom app development. The idea in a nutshell: take advantage of the rules already defined down in the database, and take advantage of the capabilities of SQL as a programming language, to the fullest extent possible.

Why is it called "SlappFramework"?

It stands for S-ql L-overs APP-lication Framework. In all likelihood, a custom app is going to require some custom code. If you happen to feel very productive in SQL programming and wonder why the world ever needs anything else, then this could be just the framework for you!

What is the "BulkOpsHelper"?

The BulkOpsHelper is the bottommost layer of the framework, sitting right on top of the database itself. It is the highway into the database. Its aim is to allow only clean data to enter production tables, with as little hand-written validation code as possible.

Circling back to the big idea behind SlappFramework: if data types, relationships, and constraints are already defined on the database table itself, then why repeat those same rules elsewhere?

How is the framework going to keep me from writing validation logic?

It’s actually very simple – staging tables.

If the database includes staging tables that are defined as strictly as (if not more strictly than) the “real” tables, then no bad data should ever enter the real tables. And if the framework forces all data through staging tables first, then robust validation is unavoidable.
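To make the idea concrete, here is a minimal sketch of the staging-table pattern using Python's built-in sqlite3 module (the framework names no particular database engine, and the table and column names here are hypothetical). The staging table carries the same constraints as the real table, plus a stricter CHECK, so a bad row fails in staging before it can ever reach production:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical "real" table and its staging twin. The staging table is
# defined at least as strictly, so bad data is rejected there first.
conn.executescript("""
CREATE TABLE Users (
    UserName TEXT NOT NULL UNIQUE,
    Email    TEXT NOT NULL
);
CREATE TABLE Staging_Users (
    UserName TEXT NOT NULL UNIQUE,
    Email    TEXT NOT NULL CHECK (Email LIKE '%@%')  -- stricter than the real table
);
""")

# A bad row is stopped by the staging table's constraints...
try:
    conn.execute("INSERT INTO Staging_Users VALUES ('alice', 'not-an-email')")
except sqlite3.IntegrityError as err:
    print("rejected by staging:", err)

# ...while a clean row passes staging and is then promoted to the real table.
conn.execute("INSERT INTO Staging_Users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO Users SELECT * FROM Staging_Users")
print(conn.execute("SELECT UserName, Email FROM Users").fetchall())
```

No hand-written validation logic appears anywhere: the only "validation" is the declarative constraints on the staging table itself.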

OK, so will the framework require me to duplicate every single table in the database, with a corresponding staging table?

Not necessarily. It depends...

A single command should be able to create a staging table, given the “real” table name as a parameter. That task should not be too difficult for the framework.
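One hedged sketch of how such a command could work, again using SQLite (other engines expose the same information through their own catalog views, e.g. INFORMATION_SCHEMA): read the real table's stored CREATE statement and re-issue it under a staging name, so all constraints come along for free. The helper name and the naive rename are illustrative assumptions, not part of the framework:

```python
import sqlite3

def create_staging_table(conn, table_name, prefix="Staging_"):
    """Sketch: clone a table's full definition (constraints included) by
    rewriting the CREATE statement that SQLite stores in sqlite_master."""
    ddl = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table_name,),
    ).fetchone()[0]
    # Naive rename: swap in the staging name at the first occurrence only.
    # A production version would parse the DDL properly.
    conn.execute(ddl.replace(table_name, prefix + table_name, 1))

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Users (UserName TEXT NOT NULL UNIQUE, Email TEXT NOT NULL)"
)
create_staging_table(conn, "Users")  # creates Staging_Users with the same constraints
```

Because the staging table inherits the NOT NULL and UNIQUE rules verbatim, it is automatically "at least as strict" as the real table.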

Second, there would obviously be no need for a staging table for any table that is not involved in end-user data entry. Finally, custom staging tables can be a big help in flexible uploading scenarios. You could build a custom staging table that matches how the upload file should look, but then “normalize” the data via SQL when actually writing to the real tables.

Let’s say we wanted to build a bulk uploading scenario involving data for a list of users, along with all of their associated roles. Build a custom staging table that mirrors the CSV upload format, with columns like UserName, IsARole001, IsARole002, IsARole003, etc. Require a single row per user in the upload file, with an ‘x’ in the column of each role that applies.

That is simpler for end users to understand and build than requiring one row per user per associated role. (This assumes the common scenario of users and roles being a many-to-many relationship in the underlying database.)
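The "normalize on the way in" step above can be sketched as a single INSERT ... SELECT that unpivots the wide, upload-shaped staging rows into the many-to-many table. This again uses Python's sqlite3 for a runnable demo; the table shapes, role names, and the UNION ALL unpivot are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Upload-shaped staging table: one row per user, one column per role.
CREATE TABLE Staging_UserRoles (
    UserName   TEXT NOT NULL UNIQUE,
    IsARole001 TEXT,
    IsARole002 TEXT,
    IsARole003 TEXT
);
-- Normalized many-to-many target.
CREATE TABLE UserRoles (
    UserName TEXT NOT NULL,
    RoleName TEXT NOT NULL,
    UNIQUE (UserName, RoleName)
);
""")

# One CSV-style row: alice holds roles 001 and 003.
conn.execute("INSERT INTO Staging_UserRoles VALUES ('alice', 'x', NULL, 'x')")

# "Normalize" while writing to the real table: each marked role column
# becomes its own row in the many-to-many table.
conn.execute("""
INSERT INTO UserRoles (UserName, RoleName)
SELECT UserName, 'Role001' FROM Staging_UserRoles WHERE IsARole001 = 'x'
UNION ALL
SELECT UserName, 'Role002' FROM Staging_UserRoles WHERE IsARole002 = 'x'
UNION ALL
SELECT UserName, 'Role003' FROM Staging_UserRoles WHERE IsARole003 = 'x'
""")
print(conn.execute("SELECT * FROM UserRoles ORDER BY RoleName").fetchall())
```

The end user only ever sees the flat, spreadsheet-friendly shape; the normalized structure stays an internal detail.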

That may not be the best example. The point is, if it’s possible to “abstract away” the implementation details of a normalized database from end users, that is a good thing. As long as the system keeps things clean and normalized within, then all should be fine.

Alright well, I'm sold for now, how do I use this contraption?

Glad you asked! :)
...(Quickstart guide is coming soon.)

© 2024 Chris Hamilton. All Rights Reserved.