Could have a binary setting. Using the following example data:

  • Tree ( code, sauce)
  • Nice ( project, sauce)

With the following tree:

--code
  |--sauce
--sauce

with one binary option ∝ we would get:

--code
  |--sauce
    Tree
--sauce
  Nice

and in the other ⊚ we would get:

--code
  |--sauce
    Tree
--sauce
  Nice
  Tree


I am confused. Why? So there are more ways we can do this

So for a directory, present children under it that match the list of filters all down to that point. Or: for a directory, present children that match the list of filters down to that point and don't match any filters beyond it. Maybe implement this and see if it can match the current code dir

Currently all data that includes all of the path parts is displayed. Maybe data is displayed when all its tags are completely met, so have an all-components filter. Nope, I think those methods are both functionally the same; I want it to display data that has the tags of the parts and nothing else

So OnlyHasTags is working, nice. There's still more to do on that front, I don't think it's exclusive. In addition to that, implementing that exclusivity doesn't spell well for this being a solution that can apply to arbitrary directories having custom filters. The solution should be applicable to any filters. Can't really think too well rn but yhhhh this shit, keep it up x ^2fd21f

We can place a member variable on the directories of a filter list during setup of the mount and then connect the signals so that the invalidation structure automatically updates the children as to changes; then, when reading the directories, we can just read from that filter list

So do I treat all directories as filters?


∝ could produce different results for the location of data which could be in multiple locations; there are different ways to do this, so the method chosen should be consistent. If not, then one cannot rely on this hierarchy for consistent location of the data, but one could rely on the same tags being present on the data and could always rely on its unique id. Systems such as python may be built around a reliable hierarchy structure. Also, systems such as python can have autogenerated hierarchy, like in gi.repository. I don't know if this offers any new functionality over our hierarchical organisation

  • First come first serve # Not reliable. If the algorithm was based on some alphabetical sorting then this could change the location of data.
  • Time based # Always reliable. The edit time of each hierarchical edit could be tracked and this can then be used to state that the data will always be in the first match in time.
  • Order based # Always reliable. The order stated in the hierarchy definition can be used to always pick the first match; I think this technique is as reliable as filesystem methods. Changing the uniquely specified directory will still break the reference as always.

How do I implement an order based exclusivity?
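A minimal sketch of the order based option, assuming the hierarchy is defined as an ordered list of ( directory path, match function) pairs; the names placeByOrder and hierarchyDefinition are made up for illustration. Each piece of data lands in the first location, in definition order, whose filter it matches, and nowhere else:

from typing import Callable

# Hedged sketch: pick the first matching location in hierarchy-definition order,
# so each piece of data is shown in exactly one directory.
def placeByOrder( hierarchyDefinition: list[ tuple[ str, Callable[ [ set], bool]]], pieces: list[ set]) -> dict[ str, list[ set]]:
	placement: dict[ str, list[ set]]= { path: [] for path, _ in hierarchyDefinition}
	for piece in pieces:
		for path, matches in hierarchyDefinition:  # the order stated in the definition
			if matches( piece):
				placement[ path].append( piece)
				break  # exclusivity: stop at the first match
	return placement

# Example mirroring the Tree/ Nice data above (pieces stand in as tag sets)
hierarchy= [
	( "--code/--sauce", lambda tags: { "code", "sauce"} <= tags),
	( "--sauce", lambda tags: "sauce" in tags),
]
print( placeByOrder( hierarchy, [ { "code", "sauce"}, { "project", "sauce"}]))
# Tree ends up only under --code/--sauce, Nice only under --sauce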

Mounting as a directory from arbitrary hierarchy

`ldd $( which mount)` lists the libraries used to mount devices on the filesystem; these libraries might be a good investigative starting point

So if going down the route of having people design structures using data structures that I've designed, then I think I will have to design a modelling tool that will

So we have the DataForTypes module

This is working well, we can specify data from the specified typedDict in the manualData dictionary or we can specify it as an attribute upon the class itself

There are some cases where data isnt associated

An example is IntFlag

The caller must be able to look at the types of data that were returned and make a decision about what to do with them

Another current example is that I want the same dict conversion to occur for multiple different type-s so for ( type, function, lambda, method) any symbol accessed with a qualified name

perhaps using current store and filtration structure and can specify data using tag-s n filter-s n shit

so in memory store

Nutrition

Do these operate upon dataIds or on version ids? VariousConceptionsOfVersionedData

  • Movement to a different store ( ignoring movements to the datas current store)
  • Custom code which is run over selection
  • Updating data to match a new specification ( new version or different spec, possibly ignore conversions to a different spec which would end up in blank data, so no point in copying). Update any to any, not just internal types; can use the conversion data and spec differencing. So need to pick the destination type and spec version if it's versioned ^9e834f Delta change discernment and application design. I think I can make ( updating data to a new spec) and ( updating data to a changing spec) the same problem
  • Addition of attributes with a specific value at a path
  • Removal of attribute at a path
  • Alteration to attribute data at a path
  • Rename of attributes
  • Addition of a value at a sequence index - takes an index which can be negative, takes a switch to determine whether the added value should be unique

op | path | versionedStatusMatters | storeOrGeneral | unVersionedRequiresMemoryLoading | versionedRequiresMemoryLoading

s, g= "store", "general"  # values for the storeOrGeneral column
ops= [
 ( "attribAddition", 1, 1, s, 0, 1), # Attribute changes in the version format require knowledge of the previous value
 ( "attribRemoval", 1, 1, s, 0, 1),
 ( "attribChange", 1, 1, s, 0, 1),
 ( "attribRename", 1, 1, s, 0, 1), # If we could create and discern optional unidirectional deltas then the versioned data can be updated in store
 ( "storeMovement", 0, 0, g, 1, 1),
 ( "runCode", 0, 0, s, 1, 1),
 ( "typeTransformation<map>", 0, 0, s, 0, 0), # Would slow next load speed
 ( "typeTransformation<direct>", 0, 0, s, 1, 1),
 ( "sequenceAddition", 1, 1, s, 0, 1),
 ( "", 0, 0, s, 0, 0),
 ( "", 0, 0, s, 0, 0),
 ( "", 0, 0, s, 0, 0),
 ( "", 0, 0, s, 0, 0),
]


( "", 0, 0, s, 0, 0),

So it seems we can manipulate arrays in some stores; we are generic here. The actions should be designed for manipulating the data as it is represented in memory. In python there are classes, sequences and maps pretty much, so attrib funcs can handle classes and maps. What about non string key maps? Well, in order to equality check the serialised format we would need to do funky stuff, so not now. Sequences and sets can be handled with

Well, aren't these all the same as the delta transform sets? However, some of them wouldn't be created by the calculateDelta function, such as runCode. Some of those created by the delta funcs may not be able to be run by the store, but that's ok, full store support isn't feasible anyway due to their varying nature. runCode could vary depending on whether it was run on load or at a specified time, so if this was passed to a function on the store then we would need to specify whether to update those requiring memory loading at that moment or upon next load. Regardless, I think specification version updates should be applied with a different function. An advantage of passing these as delta transforms is that we already have the in memory mechanism to apply them should they be unsupported by the store. The delta transform creation, formats and discernment would have to be reworked to support unidirectional transforms. So unidirectional deltas are designed here: Unidirectional delta operation design

We can use a list of unidirectional deltas as arguments to update. The store then either supports them and applies them instantly, or it does not (see below on saving a record and applying later). We also need the ability to define ConstructionSet s that only provide operation descriptions and applications with no construction function; this is good as it is then simple to create a construction function from that point. They are applied in the order specified
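A rough sketch of that flow, with every name ( UnidirectionalDelta, supportsDeltas, applyDeltasInStore) being a hypothetical placeholder rather than the real interface: the store applies the deltas natively if it can, otherwise the existing in memory mechanism applies them in the given order and saves.

from typing import Protocol, Any

class UnidirectionalDelta( Protocol):
	def applyInMemory( self, data: Any) -> Any: ...

def update( store, dataId: str, deltas: list[ UnidirectionalDelta]):
	if store.supportsDeltas( deltas):
		store.applyDeltasInStore( dataId, deltas)  # applied instantly by the store
	else:
		lTSD= store.load( dataId)  # fall back to the in memory mechanism
		for delta in deltas:  # applied in the order specified
			lTSD.data= delta.applyInMemory( lTSD.data)
		store.save( lTSD)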

So how do sequence operations work in the SequenceMatcherDeltaTransformationSet? Well, there is currently ( insert, delete, replace)
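For reference, Python's difflib.SequenceMatcher (presumably what sits underneath a SequenceMatcherDeltaTransformationSet) reports exactly those operation kinds through get_opcodes:

import difflib

before= "hit"
after= "hiyt"
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher( None, before, after).get_opcodes():
	# tag is one of "equal", "insert", "delete", "replace"
	print( tag, before[ i1: i2], "->", after[ j1: j2])
# equal hi -> hi
# insert  -> y
# equal t -> t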

If a store can't support an edit operation then it should save a record and perform the operation upon load or in the background

maybe then the ability to specify an enforced preference of the application time, with no preference being available


So we may want to perform actions on data-s which aren't in memory; they may also be in memory. So we should use ids

So are editing actions done? What about creation, deletion, duplication? Creation isn't editing and doesn't need to match anything. Store movement could come here too

creation needs ( data, numberOfCopies, additionalConfig) and is tied to a store. Currently creation is handled in memory; if one wants more they can call a duplication function, maybe a wrapper could be provided around this

duplication needs ( filter, copies, additional config) and is tied to a store

deletion needs ( filter)

movement needs ( filter, dest store, delete old query)

these all have in common that they manipulate the presence of data and not the contents, so they could all be managed under one function that manages presence manipulation, should they have similar parameters

I think the current creation mechanism is decent, when implementing duplication, we could just run the duplication after creation to create multiple copies

movement can be handled by the store, unless anything special is to be done, they can just read in the data and run the ltsd instanciation func for each one in the new store

since the parameters required are different these could be different functions that are grouped together in the 1d code text space

no i will group them to make future edits to the interface easier
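A sketch of what grouping them could look like, using the parameter lists noted above; the class and method signatures are assumptions, not the final interface:

from typing import Any, Optional

# Hedged sketch of a grouped presence-manipulation interface; names are placeholders.
class PresenceManipulation:
	def create( self, data: Any, numberOfCopies: int= 1, additionalConfig: Optional[ dict]= None): ...
	def duplicate( self, filters: list, copies: int= 1, additionalConfig: Optional[ dict]= None): ...
	def delete( self, filters: list): ...
	def move( self, filters: list, destinationStore: Any, deleteOld: bool= True): ...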

one thing to work out with this is what happens when there are multiple holders of the same version. Well, multiple loadings of the same version should result in different versions in memory and should be mapped to different version ids

it is still possible for different in memory software aspects to hold a reference to the same version, e.g. one aspect creates a new version, another then loads all versions of a specific data id

what happens when one of the holders saves the data? Well, upon save a delta is constructed between the latest version and the loaded version ( in the future this could be between the loaded version and the memory, with the loaded version id specified). This delta is saved to the store with the current version id, a new version id is then chosen. This means that the non saving holder can't rely on the version id staying the same
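A sketch of that saving flow under the stated rules; the store methods, lTSD attributes and the deltaBetween callable are all hypothetical stand-ins:

import uuid
from typing import Any, Callable

def saveVersioned( store: Any, lTSD: Any, deltaBetween: Callable[ [ Any, Any], Any]) -> str:
	# A delta is constructed between the latest stored version and the version this holder loaded,
	# and is saved against the current version id.
	latest= store.loadLatestVersion( lTSD.dataId)
	delta= deltaBetween( latest, lTSD.loadedVersionData)
	store.saveDelta( lTSD.dataId, lTSD.versionId, delta)
	# A new version id is then chosen, so a non-saving holder cannot rely on the old id staying current.
	lTSD.versionId= str( uuid.uuid4())
	return lTSD.versionId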

Someone online said it cant be done but it really doesnt seem that hard

We can create custom definitions for specific types, and then those that don't have custom definitions can use isinstance

We have a checking function ⌿( obj: Any, complexType: type)-> bool: ... Examples of custom definitions:

if a tuple of specific types and length is specified, we can check the instance types in order using ⌿

If a list of multiple types is passed then we can run ⌿ on every member of the instance for each specified list type and make sure it matches one of them. The same can be done for dict but with keys and values
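A minimal sketch of the ⌿ checker (here spelled matchesComplexType) covering the tuple/ list/ dict custom definitions described above and falling back to isinstance; this is an assumed shape, not the existing implementation:

from typing import Any, get_origin, get_args

def matchesComplexType( obj: Any, complexType: Any) -> bool:
	origin= get_origin( complexType)
	args= get_args( complexType)
	if origin is None or not args:
		return isinstance( obj, complexType)  # no custom definition: plain isinstance
	if origin is tuple:
		# specific types and length: check each member in order
		return ( isinstance( obj, tuple) and len( obj)== len( args)
			and all( matchesComplexType( member, arg) for member, arg in zip( obj, args)))
	if origin is list:
		# every member must match one of the specified element types
		return isinstance( obj, list) and all(
			any( matchesComplexType( member, arg) for arg in args) for member in obj)
	if origin is dict:
		keyType, valueType= args
		return isinstance( obj, dict) and all(
			matchesComplexType( k, keyType) and matchesComplexType( v, valueType)
			for k, v in obj.items())
	return isinstance( obj, origin)  # fallback for other parameterised types

print( matchesComplexType( ( 1, "a"), tuple[ int, str]))  # True
print( matchesComplexType( [ 1, "a"], list[ int]))        # False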

Insert notes on human, artificial and combined developments

Python Gtk, GObject, GtkSource, PyGObject all that stuff Pymongo

Jedi is used for terminal completion. fusepy: if unwanted, a custom interface can be written; this project is doing python funcyness and doesn't actually write c code

Components from existing notes

Digital

Sysml discussion and example

  • need for expansion of concept at later time, system model "fleshed out"
  • determining if a statement about a system ( real or proposed) is true. This follows modelling a system and analysing it; no production planning is needed. Perhaps achieved using generic analysis tooling, but would only know during further design of each system
  • Modelling of real world material organisation, without simulating the material~s structure and rolling dynamic-s. If it were necessary that the discerning body knows of the ( conjoined material)~s ability to move, then it would be necessary to be able to model the ability for conjoined material to move; perhaps this could be fleshed out to provide simulation if necessary ( unlikely case but would be relevant when generalised to other cases). Conjoined-ness of material is more complex than a grouping of item-s, reality is more complex than discrete modelling of item-s, perhaps swapping out system modelling methods, example: ( complex material joining-> grouped discrete item-s conjoined model)
  • Possible modelling of reality as the whole of reality without denoting separate contributors, wouldn't want negative computation impact-s. Splitting on detail, lazy load. So a concise way to model an entirety of reality or proposed state without unnecessary concepts like needing to model in chunk-s of space. Start modelling anywhere and interact it with anywhere else. Calls in to question how differing planes are modelled and how discrete boundaries may be broken down. Planned work
  • Custom language format vs generic interface in a general language

2022-03-12

  • The idea of a plan of action including manual task designation and automation. This can be adapted by the computer system or by a user. Could be multiple. Disambiguation needed of word "plan": scoping of tasks that should be taken and automation that will occur. Multiple could exist fulfilling some set of desire-s
  • Some aspect-s are dependent on data that is known to be required at the time of modelling but is not available at the time of modelling; in order to make certain analysis-s, certain data must be obtained. Example-s: for rain water input into a system one can use weather prediction-s, which can be obtained ~144 hours before the predicted moment; temperature similarly, due to weather system dynamics
  • Usage of primarily single scalar measurements to operate is questioned
  • Assuming usage of discrete model-s as with ( prod proc)-s and a choice of ( optimisation method)-s the operation would exhibit something similar to exponentiality
  • An aspect of a plan may not be fulfilled, whether planned or by automatic action initiated by the comp sys. These can be grouped as a computer system desired impact upon the world. This links back to desired operation. Both comp sys material desired impact and social and other non material reality aspect-s are important. Non material example: this price should be proposed. Non liberal dlc example: are non material real thing-s only confined to social agreement-s, what about agreement to self, what of imagination; all concept-s, software, proposed reality-s exist within the real, although these are implicitly known to be real by the software, the comp sys proposing new concept-s, software. So was discussing that these implicit aspects are not included in suggested changes and also do not need to be modelled explicitly as part of the reality model. Also need to know if there is a direct linkage between what is modelled by the comp sys and what is input into the reality model. So what is thought to be modelled is a system of material organisation. Does one model prod proc-s separately ( was: how described task-s alter material), however this may be part of a more general model including material organisation. So structures as well as definition-s of how one may alter such material; perhaps different protocol specification-s to allow for flexible modelling, mentioned increase detail before -- social contract-s such as what a price for a certain item is with a certain trader. So what the comp sys is to propose is the task-s which must occur at when to achieve ( the desired delta to reality, the desired specified reality state); achieving a specified state is a new idea. These task-s can be automatically applied to automaton-s or not, social contract-s which could be enacted. Through time with plans there will be variance and divergence, so divergent pathways are able to be navigated; one-s can be chosen to then see further into the future, and also to calculate prediction-s for different ( social contract)-s which can be proposed. So the proposed components seem separate to that which is modelled, mostly; well the plan format is different but that is so. Minus social contract: modelling material and how it may change, modelling how to alter material to achieve desired outcome-s. Whilst it is evident that the model-s data is present in the output, I can't say that the output component-s form a part of the same model as the input model. The output is not an altered reality model, although the possibility of the output to be in some reality model format may be explored. So work occur-s; this can be the output of the model, possible path-s into the reality. Then the same is found in the input model. This is utilising the general concept that one model-s different aspect-s of reality and the comp sys proposes path-s through reality

LINE 25

Physical


method ŧ

of single process lifetime management is: all pieces exist in store, some may be in memory in the main process. If one requests a piece from the main process: if the piece is in memory they get that reference, if the piece is in store then it is loaded in

one can send lTSD to another process; it is ensured that any required store-s are sent over too. If one saves data in a secondary process then the data in all other processes should be updated to maintain consistency, as in memory information is always more valid than that of the store. If the other stores were not to be notified then they would hold data that was behind the store, which is unacceptable for method ŧ

another way to deal with this situation would be to have no other process store-s save data but the primary; when a secondary process wishes to save data it must communicate its data to the primary process store, which then updates the memory information and saves the information to the store

this leaves the organisation of a piece~s lifetime in a more disorganised state than when there is only a single process. With a single process there is only ever at most ( a memory instance) and ( a store instance) of data; the memory instance is always the truth if present

with multiple processes there is ( a store instance) and there can then be ( a memory instance) in the primary process; when within one process this functions as before. Each secondary process

processing

sending lTSD across a process ( process args, pipe, queue):
	primary-> secondary: any store-s referenced by the lTSD are ensured to be contained within the secondary as bridging stores
	secondary-> secondary: any store-s referenced by the lTSD are ensured to be contained within the secondary as bridging stores

loading a piece:
	primary: if in memory then return that instance, otherwise load from store
	secondary: if in primary memory, then request and receive that; if not then { if the store supports multi state splitting, retrieve ourselves, otherwise retrieve from primary interface}, it is not saved to any memory track in the secondary process

creating a new piece:
	primary: if tracked by store, saved to store in init-> _dataInit
	secondary: if tracked by store, sent to primary process where it is tracked in memory and saved

saving a piece:
	primary: the piece is saved to the store
	secondary: { if the store supports multi state splitting then save on the secondary, if not then the primary must save}; if the data is present in the primary process memory then send the data to the primary process where the lTSD data is replaced

retaining: works as primary with bridging
deletion: triggering deletion on a bridging store also triggers deletion on primary, ensure no double deletion
reload: { if multi state splitting then load in bridging, otherwise load in primary and send over}

throughout this process, any operations that can be done in a secondary process should be done in a secondary process

method đ

this follows mark~s advice about transaction ids: each piece requested is loaded from the store, perhaps with optional retrieval of an existing version; then the multiple instances of that piece in memory have a mechanism to decide which is written to the store. Those which are never asked to be saved are never saved to the store

it is designed around a system in which memory cannot be quickly shared ( as in shared memory) between all software aspects. It permits all aspects to operate without having to send data between each other; all aspects retrieve and push data directly to the store. Sharing data between aspects may be desirable, that could be layered upon this system

the ( transaction id)-s must be stored in a

those that are asked to be saved are done so in a sequence when viewed on 1d time. That which becomes the truth in store can be decided by its location in this sequence. Instances can also provide a 1d priority value which can be purely interpreted or interpreted in combination with the sequence value. A 1d time value can also be extracted from the time of retrieval. The number of successful saves can also be passed

so some combination of any of ( retrieval sequence, saving sequence, priority sequence). Perhaps a custom function could even be used, which takes the data and its position in all 3 sequences and returns a boolean as to whether it should be saved. Knowing the length of the retrieval sequence may also be useful; the savingLocation will always be the current last in the sequence; it is not currently imagined to obtain the priority ahead of saving. mark~s version would only return true if ( savingSequence== 0)

def lTSDSavingDecider( lTSD: LongTermStorageData, retrievalLocation: int, retrievalSequenceLength: int, attemptedSaveLocation: int, priorityLocation: float, deleted: bool):...
def firstRequestSaves( lTSD: LongTermStorageData, retrievalLocation: int, retrievalSequenceLength: int, attemptedSaveLocation: int, priorityLocation: float, deleted: bool):
	return attemptedSaveLocation== 0

the ability to test whether a save will go through would be useful. Maybe this could be a central mechanism: a test can be performed to determine whether an aspect should save, and then a save always performs the save. Once a piece is saved then all other lTSD instances across all process-s can be notified using the event system; maybe they could listen for successful saves, non successful, or either. One can also receive a boolean value as to whether a save was successful upon saving

the transaction states and ... should be saved to a non process location so that multiple process-s can interact with this system in the same way. This suggests the utility of a default definition of the saving format, as would also be useful with lTSD rep-s; one can inherit from the definition and ( edit, add, remove) entries

it makes more sense in this scenario to enforce that all store interfaces must support multi state duplication; if not then a bridging store can be created. The existence of store multi state duplication will not be discussed in # processing

this approach may be more conducive to multiple programming language support, and does not require a constructed primary process which requires constant communication with constructed secondary process-s. The consistent requirement for new retrieval from the store may

DataEditor example

This indicates a break from current single process operation, and compatibility with the existing method~s must be established. Currently the data editor retrieves the LTSD and also produces a copy. The changes performed by the interactor are instantly performed upon the copy, and when the user wishes to save, the data of the main instance is set and is then saved to the store; the copy is then reconstructed

If ## method đ is applied then the data retriever can retrieve a single copy; edits are then made to that copy and then saved to the store. If a save occurs during editing, then the aspect can test for a save result and can report this back to the user; if it needs to force then do so, if it doesn't need to force and just checks then this can be presented as a forceful operation in the gui

Instances where one may want to share a reference

Of course multiple aspects may share the same reference manually by accessing one another if possible, but do I wish to support retrieving an existing reference? This would be trivial to implement should the need arise. This may only be possible within a single process, as to obtain data from another process, the execution position of that cursor would have to be directed towards retrieval and sending of that data, unless that data can be retrieved by an aspect other than that process~s executor

processing

sending lTSD across a process ( process args, pipe, queue): any: it must be ensured that the lTSD~s store is established on the process, the same for all referenced data # copying all referenced data. The data from the original process is sent over and the retrieval index is incremented past the maximum, so the maximum is obtained and a new entry added

loading a piece: the piece is obtained from the store, the max transaction index is ( obtained, incremented, set for the retrieved piece)

creating a new piece: the piece is saved to the store and then a transaction index of 0 is recorded

testing for a piece~s save result: the save result function is run in the calling process to obtain the input data; ( retrievalSequenceLength, savingLocation) must be retrieved from the store

saving a piece: the save result function must be run at the time of saving, then the originating process-s interface is used to save the data. A lock may be needed to prevent multiple process-s from performing an overlapping save operation

retaining: this works the same as at present, where the data can be added to a dictionary upon the calling process-s store

deletion: the data should be removed from the store, all retained memory instances should be removed from the store, whether the existing transaction-s should be permitted to save and un-delete data is unknown. All should be notified and ejected from retainment but then they could either be blocked or not blocked from future saving, they could also have the data deleted from quick memory. Other processes than the originating will not be initially aware of the change and so a check for deletion must be performed upon all operations that this will affect, maybe those within the lTSD

reload: the data should be retrieved from the store using the originating instance

copying all referenced data

this must be done during the serialisation. This is a good case for implementing the concept of a serialisation session in which common data is made available; this avoids passing the data around as arguments to each function. The current entry points into serialisation can stay the same and initialise the session, calling the actual serialisation initialisation; then all components of serialisation that wish to recursively do so can call the internal function. This allows one to effectively track the lifetime of a session
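A small sketch of the serialisation session idea: the existing entry point opens the session, recursive components call the internal function, and common data (for example the set of referenced data ids to copy over) lives on the session instead of being threaded through arguments. All names here are placeholders:

from contextlib import contextmanager
from typing import Any

_currentSession: list[ dict]= []  # stack, in case sessions ever nest

@contextmanager
def serialisationSession():
	session= { "referencedDataIds": set()}
	_currentSession.append( session)
	try:
		yield session  # the lifetime of the session is exactly this with-block
	finally:
		_currentSession.pop()

def serialise( obj: Any):
	# public entry point: initialises the session then defers to the internal function
	with serialisationSession() as session:
		result= _serialiseInternal( obj)
		return result, session[ "referencedDataIds"]

def _serialiseInternal( obj: Any) -> Any:
	# recursive components call this and can read/ write the shared session data
	if _currentSession and hasattr( obj, "dataId"):
		_currentSession[ -1][ "referencedDataIds"].add( obj.dataId)
	return repr( obj)  # stand-in for the real serialisation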

I wish to implement versioning where I save the full data as a reference and subsequent changes upon that. I wish to implement changing data instances between any two versions of their base class. We could define transforms between data that are used both when applying changes to instances due to a detected change in their base class, as well as using these transforms to step between versions

Do these transforms only go towards newer versions or both directions?

So to save the first version is easy, I save the whole data. How do I now save changes? I want it unique to certain types. So if the base data type is a string then I want to use a custom function to produce the change data. If we can't find such a function for that type, or if the base data type has changed, then we must store a new copy. It would be good to go down into a hierarchy structure to determine the highest level in the tree that changes. So for any runtime editable class that doesn't have its own implementation here, we can test for changes upon the data contained within it

class T():
	tree: int
	nice: str
	poy: list[ "T"]

So we may start off with an instance:

t= T(
	tree= 4,
	nice= "hit",
	poy= { T( 6, "f", []), T( 2, "r", [])}
)

So this instance may have their tree field change to 9; we could loop over the attributes and do equality checks. Here we need to store the varName and the new value, so varChange: ( "tree", 9). In order to make this bidirectional we would either need to store both values or we would work out the value prior in a transform chain

So we store no integer data transforms but we do for a string. So one changes nice to "hiyt". We loop over and do equality checks and find that the value for nice is unequal. We go to nice and, because we can see that str supports data transforms, we can query how the nice string has transformed using the string data transform functions. In the string transform functions we find that a y has been added at index 2; maybe I will only do line based string transforms, but for this example it could be: addition: ( 2, "y"). This is bidirectional. We need to store the final result as a walk down the hierarchy, so we could use a passthrough name: pass: ( "nice", addition: ( 2, "y")). This remains bidirectional. The transformation names are unique to the type that is providing them. With the example directly above, it could be inferred what the type of nice at the prior stage is from calculating the stack before, but this is not bidirectional. Instead we can store the type that provides that transformation, so: ( ( "Some.Module", "RuntimeEditable"), "pass", ( "nice", [ ( ( "builtins", "int"), "addition", ( 2, "y"))]))

maybe we could also add an option for denying a transform so the above layer just defaults to a sweeping change

As a bigger example, let's say we then change the poy var. Handling additions and removals from looking at the before and after is very difficult and I don't want to implement it; it is a very similar problem to the string difference one, and so if all elements of the list could be converted to bytes then we could use the ndiff function on a base64 ascii string and then use that arrangement information on the original objects. I think for now a list shouldn't have custom transforms, so let's use a set in this example instead

So we add one more T instance and we change an existing one

ADD:
T( 9, "o", [])

CHANGE:
T(
  6, --> 44
  "f",
  []
)

with sets we could loop through the before: if they aren't in the after then they're "notPresent", and ( with the remaining in the before) if there are extra in the after set then those extra are "newlyPresent". So here we would be saying:

( ( "Some.Module", "RuntimeEditable"), [ ( "varDelta", ( "poy", [ ( ( "builtins", "set"), "notPresent", [ T( 6, "f", []), ( ( "builtins", "set"), "newlyPresent", [ T( 9, "o", []), T( 44, "f", [])])])]))])

We can store transforms as a list

operation= tuple[ operationId, operationData]
deltaData= tuple[ type| None, list[ operation]]

Like the above example, some

For runtime editables we can loop through attributes and check for ones that are notPresent and ones that are newlyPresent and use them similar to the set. varChange, notPresent, newlyPresent work on dynamically allocable types; what of non dynamically allocable types? These all still work, as on slotted classes we cannot add non specified attributes but we can still remove existing ones

So for a type implementing data transforms we need:

  • A function that takes two memory instances and produces an ordered list of bidirectional transforms from one to another. Transforms look like ( typeSerialisation, transformId, transformData)
  • A function that takes an object instance, a transform list and a direction, and can transform the given object instance using the transform list and direction (see the sketch below)
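A sketch of that pair of functions as a protocol; the names and the exact transform tuple shape are assumptions drawn from the notes above:

from typing import Any, Protocol, Literal

# ( typeSerialisation, transformId, transformData) as described above
Transform= tuple[ tuple[ str, str], str, Any]

class DataTransformProvider( Protocol):
	def calculateTransforms( self, fromInstance: Any, toInstance: Any) -> list[ Transform]:
		"""Produce an ordered list of bidirectional transforms from one instance to the other."""
		...
	def applyTransforms( self, instance: Any, transforms: list[ Transform],
			direction: Literal[ "forward", "backward"]) -> Any:
		"""Transform the given instance using the transform list in the given direction."""
		...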

When storing we need to know if we are looking at a list of transforms or if it's just a restore of the data. We can give an indication of this for every full store, maybe a bool called fullStore next to the data. This frees us up to do all sorts of full store deletions if possible to save space, update latest to full to increase speed, all sorts of cool shit. We can solve the ordered list backwards to go ( to -> from)

How will in memory instanciation change

Currently when a code aspect wants a python object, in all store implementations I check if it is in the memory dictionary. If it is then I return that, and if it isn't then I load from the store into memory, save it to the dictionary and then return that

I think our loading function should take an optional version id parameter; if it isn't specified then the store should provide the latest version. Then we still have an instanciatedData dict: non versioned acts as before, but if we are versioned, instead of storing the memory instance as the value, we store another dict which is accessed by the version id

The saving function should have the same interface; it will have to use knowledge about the versioned status contained on the ltsd to determine how to save it. It should have a toggle for including committer data, set to off by default, ( username, pcname)

I think a function on the store to get all of the versions in chronological creation order should be good. Other functions which return data related to ltsd need an option to specify version
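A sketch of how the instanciatedData lookup could change for versioned data (the store helper methods here are assumptions): unversioned ids map straight to the memory instance, versioned ids map to an inner dict keyed by version id, and an unspecified versionId means latest:

from typing import Any, Optional

class StoreLoadingMixin:
	def __init__( self):
		# unversioned: dataId -> instance; versioned: dataId -> { versionId: instance}
		self.instanciatedData: dict[ str, Any]= {}

	def load( self, dataId: str, versionId: Optional[ str]= None) -> Any:
		if not self._isVersioned( dataId):  # hypothetical helper
			if dataId not in self.instanciatedData:
				self.instanciatedData[ dataId]= self._loadFromStore( dataId)  # hypothetical helper
			return self.instanciatedData[ dataId]
		if versionId is None:
			versionId= self._latestVersionId( dataId)  # unspecified means latest
		versions= self.instanciatedData.setdefault( dataId, {})
		if versionId not in versions:
			versions[ versionId]= self._loadVersionFromStore( dataId, versionId)  # hypothetical helper
		return versions[ versionId]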

maybe just do fsjson store first

Extra data on ltsd

Version number, version commit info, special: movement funcs

Can check to see if a specified default factory or value is present; can check to see if the class is instanciable with no arguments. If not then can try and find the type of those arguments in the signature and use that to find default values for those types and use those instances to instanciate the desired type.

Complex types

What about complex types found in the typing module? The above method could generate an instance of tuple-> () but it couldn't generate an instance of tuple[ str]-> ( "",)

list is easy as its arguments dont indicate a prescence

dict is the same

NewType( "A", str) should have the same default instance as str

Union[ g, h, j] can just have the default instance of the element at index 0. this is g here

Any can have a default instance of None

ForwardRef needs to have the same default instance as the thing that it is referring to
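A sketch of default instance generation for those typing constructs (Union picks index 0, Any gives None, tuple builds per element defaults, list/ dict ignore their arguments); NewType is handled via __supertype__ and ForwardRef is left out here as it would first need resolving against a namespace:

from typing import Any, Union, get_origin, get_args

def defaultInstance( tp: Any) -> Any:
	if tp is Any:
		return None
	if hasattr( tp, "__supertype__"):     # NewType( "A", str) -> same default as str
		return defaultInstance( tp.__supertype__)
	origin= get_origin( tp)
	args= get_args( tp)
	if origin is Union:                   # Union[ g, h, j] -> default of the element at index 0
		return defaultInstance( args[ 0])
	if origin is tuple and args:          # tuple[ str] -> ( "",)
		return tuple( defaultInstance( a) for a in args)
	if origin in ( list, dict, set):      # their arguments don't indicate a presence
		return origin()
	if origin is not None:
		return origin()
	return tp()                           # plain type instantiable with no arguments

print( defaultInstance( tuple[ str]))       # ('',)
print( defaultInstance( Union[ int, str]))  # 0
print( defaultInstance( dict[ str, int]))   # {}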


Aspects of a manual approach to functionality enaction

For production planning, asking a question such as "I want to maintain my body" is being tackled by me with the bounding of descriptions and then a discerning body. Some would then see a progression from this to be to train a neural net on what this english sentence means, but I think english is far too messy. Developing a method of describing desire which can succinctly encode as much intent as "i want to maintain my body" that can then be picked...

Were I to develop systems before developing a more concise way of relating my understanding to a computer's parsing and operational ability, and developing methods of adapting code to changing worlds, then I would have to use existing methods, or I can use said existing methods to create a bespoke way of describing how the computer may infer this way. Should I develop this description, I want it to be fairly frictionless to update this description and to update existing data created with this description.

Existing modelling methods are:

Comparison of existing modelling methods

I have felt a desire to not describe systems using formats that I have created but to stick with the general nature of a general programming language; I wish to avoid a situation where I effectively create a custom modelling language within software, especially considering that the primary code should be accepting of continuous iteration itself. An example is in modelling production processes, as I believe the implementor of the modelling environment will not be able to adequately predict the complexity needed to model. Maximising speed of code writing helps mitigate this, but in some cases, such as those that would require implementation of a powerful general modelling environment, these tools already exist

Using the production process example and modelling within a python class definition:

I should maintain a definition of a common protocol which other elements of my programmatic system can use to see what to
expect when interacting with a production process definition. This could be a `getOutputType()` method. A discerning programmatic instance may
want to know how many work hours it is predicted to take n number of people. We could expose a `workHours( nPeople: int)` function or a
`workHours( people: list[ Person])` function that could calculate it based upon a persons skills. We could expose a list of action objects
which themselves have a work hours value which we can sum up.
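A sketch of that common protocol in Python; ProductionProcess, Person and Action are stand-in names for illustration:

from typing import Protocol

class Person:
	skills: list[ str]

class Action( Protocol):
	workHours: float

class ProductionProcess( Protocol):
	def getOutputType( self) -> type: ...
	def workHours( self, people: list[ Person]) -> float:
		"""Predicted work hours for these people, e.g. calculated from their skills."""
		...
	actions: list[ Action]  # alternatively, expose actions and sum their workHours values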

With a production process one could encode ( the structure which is common to all of that set of processes such as all lettuce production) within
a class definition and then instances can then be used to store data which is unique to that instanciation of that product definition

Here the ( type-> instance) structure where we suck platos marble cock isnt holding up and that is fine, it is a tool that I am using to achieve my goal

Moving on from this descriptive zone, the process of creating instances of platonic things works well with commonly repeated data such as numbers

So a lot of bulk actions take a list of filters as arguments

How should versioned data be handled? Possible options: to exclude versioned, to only retrieve the latest version, the nth version, the first version

Current operation of FilterList

So currently, in a FilterList, we hold a list of filters and we can either filter our operating store id pair s to a specific index or we can filter given store id pair s to a specific index

This filtration is done in memory using the filters defined in Filters.py

currently a filter is passed the entire store id pairs

Desired operation of FilterList

For filtration that can occur within a store to be done within the store to increase speed. Data that is loaded from the store into memory may have been modified in memory, and so this data should either be filtered using python functions or it should be saved to the store before filtration. The store's ability to handle the filtration operations needs to be tested,

when a store filters ids it is nice to make sure the programmer of the store interface doesn't have to worry about whether or not a filter changes its behaviour based upon information from data outside of its own stored data. If such a filter existed, then we would have to synchronise the filter index between all of the store id pair s so that that filter could be run with all store id pair s at that level; all the other types of filters would be run solely with one store at a time. This allows multiple filters to be passed to a store at once, and the store interface then has the opportunity to group the behaviour together. Is the complexity of synchronization worth the ability to implement filters whose behaviour changes based on information outside a single store? This problem also presents itself in splitting up ( versioned, in memory, other) data items. This is discussed here ## Commonalities

don't finish filtration prematurely if a blank id list is filtered, as we could have addition filters

so we are given a command to filter our operating ids to index 5
we have operating ids: [ [ store0, [ 1, 2, 3, 4, 5, 6, 7]], [ store1, [ 1, 2, 3, 4, 5]], [ store2, [ 1, 2, 3, 4, 5, 6, 7, 8, 9]] ]
we have filters a, s, d, f, g, h, j

so we are filtering to h

so if we were only operating in one store: [ store0, [ 1, 2, 3, 4, 5, 6, 7]]

it would be nice to be able to pass multiple

so: we need to pass the filters to the store and the store needs to determine which ones it can group and apply. We need to organize filters into rearrangeable and non rearrangeable at some point. Those which couldn't be called by the store need to be called using the python implementation

this should be designed in tandem with the way in which the bulk actions use filtration

currently filters are instanciated to construct runtime information for the gui widgets, and so it seems to make sense to not pass the data to an initialiser and instead define the data format and pass it along with the class; a custom func or the class init can be called to construct the gui. This allows the stores to retrieve the necessary data in a manner that is observable using the filter's data specification

Store interface bulk action filtration

What are the places where filtration is used? Bulk editing, bulk presence manipulation, data retrieval

data retrieval also can retrieve solely ids given a sequence of filters

So bulk editing, in mongo, could pass the filter to the update func if the passed filters are fully supported; else it could resolve to an ( id, version) pair list which is used in the filter slot of update. Subject to the same need to either save all memory data and do it in store, or do it in memory

same thing in presence manipulation

same thing in data retrieval

so these need to observe the list and take action; it is behaviour specific to the store. So from this example, the store needs access to the full id retrieval from self functionality that the FilterList needs, including separation of versioned data and memory presence handling

so it seems like this could either be done in the filter list or in a seperate line of functionality seperate from the store and the filter list it could be done in the store to be honest if

so the filter type needs to be passed along with any data needed because the graphical list needs to update the FilterLists understanding of what data is being matched against, it maybe makes sense to store the data in a specified format in a .data field

Implementation

Ungeneralised operation description

Mongo store bulk retrieval:
convertedDict= attempt conversion of the filter sequence into a filter dict. We can't tell if the filter list will match versioned data unless a filter which specifically excludes versioned was passed or we have an exclude versioned option. Continuing design as if versioned is unsupported and no data is currently in memory: if full conversion, find filter= convertedDict; else we need to construct a filter which can be used with a find. Either the convertedDict has a mix of converted filters or it has none, so we need to look at each block of filtration, each separated by being either memory filter s or a filter dict. If the start block is a filter dict then the initial ids need to be retrieved using that; if the start block is a memory filter s then we need to load all the ids in. Each subsequent block gets an input of the previous step's output. Upon handling a memory filter s block, we need to call each filter's function, passing the resultant ids between them. Upon handling a filter dict block we need to call find with an id projection. We then end up with the resultant ids which we can use to construct the find filter
| We then use the find filter to retrieve the desired information
| If we can't be sure that versioned data isn't present then version data should be loaded into memory and processed there. This is split off before the full convertedDict is constructed. All versions should be loaded, as our units include each version. If a filter is specified which denotes specific versions, such as ( the 2 newest), then this can influence this process. If there is no versioned data then we don't need to process it; if there is no non versioned data then we don't need to process that. The results of the version filtration can be merged with the non versioned. This process could be optimised by handling all of the full data version entries in the store
| Data that is already present in memory holds newer information than that in the store and should be referred to. When unversioned, in memory representations should be respected over store representations. When versioned, then there will never be a store saved version which is also loaded in memory. !!in memory versions should represent a new version and not a change to the one they were loaded from; it should still store the one it was loaded from, intrinsics become: ( version, loaded version, loaded commit time, loaded type). This doesn't mean that we don't have to process in memory versioned data, as unversioned memory data should be processed separately. Should it be excluded from main processing, should main processing be amended with its results? Versioned memory data may
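A sketch of the block handling described above, with hypothetical names throughout: the filter sequence is assumed to already be split into alternating blocks ( store convertible filter dicts vs lists of memory filter functions), each block narrows the ids produced by the previous one, and the final ids would feed the find filter:

from typing import Any, Callable, Optional

def resolveIds( collection, blocks: list[ tuple[ str, Any]], allIds: Callable[ [], list]) -> list:
	ids: Optional[ list]= None
	for kind, block in blocks:
		if kind== "filterDict":
			query= dict( block)
			if ids is not None:
				query[ "_id"]= { "$in": ids}
			# id projection only; we just need the ids at this stage
			ids= [ doc[ "_id"] for doc in collection.find( query, { "_id": 1})]
		else:  # "memoryFilters": python filter functions over the id list
			if ids is None:
				ids= allIds()  # a memory block at the start needs all the ids loaded
			for memoryFilter in block:
				ids= memoryFilter( ids)
	return ids if ids is not None else allIds()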

need to figure out versioned and memory presence separation and merging
also presence separation denies the ability of filters to reliably base their filtration upon the presence of more than one piece of data
given that we are separating the data
synchronisation is required
	
	Let's do this specific to mongo first
	So if we aren't sure that versioned data isn't present we need to process it separately; filtration is already discussed above
	We need to load all of the versioned data in to do this; we can use a filter to do this,
		we could apply a negative of this filter when loading the initial ids for the main filtration
		doing otherwise would possibly lead to skewed results in the main filtration, and the processing time wouldn't be much slower than placing the negative filter
	We can also filter present memory items in the same filtration as the versioned data to simplify the potential synchronisation process
		a negative filter can be applied to exclude

Generalisations of operation

Application of generalisations in operation description

How incremental filtering will be implemented

How versioned data can be split into its own data items and passed through the filtration structures in the store and FilterList

Synchronisation of grouped filtration s

so to filter the data there are different ways of stepping through the filters

so when one reaches a synchronisation point then all the others have to be filtered to that point too. Then, when all are done before the synchronisation point, the results can be collated and then filtered by the filter. I currently don't have a need to do this anywhere but default memory. We can do this for all filters requiring synchronisation in that block

the filtration now needs to be divided up again between its original participants

so in order to determine how to split the now joined results up, we can look at the results of each pathway before the joined filter; for each result, if the same result is present in the output of the joined filter then it is passed on, if not then it is not passed on. So it is the intersection of both sequences; for this reason I want to enforce that filters requiring synchronisation don't add any data to their result

the split results can then be lined up as inputs to the different methods, then the filtration can be resumed
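A sketch of that split, under the rule that synchronised filters never add data: each pathway keeps only those of its pre synchronisation results that survived the joined filter (an intersection), then resumes independently:

def splitJoinedResults( perPathwayInputs: list[ list], joinedOutput: list) -> list[ list]:
	surviving= set( joinedOutput)
	# intersection of each pathway's pre-sync results with the joined filter's output
	return [ [ pieceId for pieceId in pathway if pieceId in surviving]
		for pathway in perPathwayInputs]

print( splitJoinedResults( [ [ 1, 2, 3], [ 4, 5]], [ 2, 3, 5]))  # [[2, 3], [5]]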

the synchronisation events are in the same order and number for each simultaneous filtration, so it could be that we aim to run each till the end in order, and then when a synchronisation event is found in the first process the others are brought up

i would prefer to determine the final course of action before taking it; so if we are filtering g, h, j, then we know to filter g to 4, ... so I envision a controlling software aspect which has a generic interface to all types of filtration

Filtration of pieces that are desired to be filtered in memory

So there is a problem to solve

when filtering ids which we know we want to filter in memory, what do we do?

the in memory filter implementation s shouldnt implement their own mechanism s to check if the data is in memory,
the results of its actions should be either using in memory data or loading data into memory

currently filters call out to the store for information, which may partially load the data in or use in memory data if present. In this scenario we know that the filtered ids are present in memory. Should two software aspects be allowed to operate on data at once, removal from memory should be disallowed during this, or this aspect must make compromises

just realised that in memory filtration cant be completely disregarded if no data or tags are processed by the filter sequence
this is because versions in memory are not saved to the store yet and their ids therefore wouldnt get captured by the filter, they must be considered

so when filtering pieces that are known to be in memory, we don't need to worry about conserving memory space as those pieces are already in quick memory. When filtering pieces that are in the store and not in memory, we can balance the loading and the filtration: I imagine loading a batch in, filtering that completely, then loading another batch in. This would prevent repeated loading of the same piece, and prevents repeated deconstruction of version pieces. An issue here is that synchronisation requires all ids to be filtered at once; a way around this is

another way is for filters to call a batch loader, and the batch loader functionality handles returning in memory items. How can this be made to play with not continuously deconstructing and loading in pieces? Well, for the non memory ones that must be filtered into memory, they can be done in batches still, and then the filters can call the batch load; the batch loader then should return the in memory items. The other alternative would be passing the data to the filter directly

{ well so option ¶ is to not load pieces before running the filters. This means that each time a filter is run, the piece will be loaded, processed, the id is returned and the piece has no references and is garbage collected; then another filter in the chain is called and loads that same piece in again

option ŧ, where piece s are loaded in before running the filters and piece s are accessed generically from the filter. This can be done by batch loading before running the filters so that batch is in memory, and then filtering that batch. This would have to be done in between synchronizations, as synchronisations require all of the data; upon a synchronisation the filter would be called with all ids, and as the filter method uses a generic method of obtaining data, it can call that and still obtain the needed information

option ←, where pieces are loaded in before running and then passed to the filter for use. To retain optimisations where the whole data is not needed to be loaded, the sequence could be scanned and then the required data passed to it; this would require the filter to specify what it needs, as if it were arguments to the bulk load function. This may have an effect on other aspects of software

known bad ways: loading all pieces in memory and then filtering } i feel most comfortable with ŧ. A method of quickly retrieving the in memory instances is important when calling bulk load imo; maybe this could be special behaviour for a filter which desires multiple ids, idk,

this is easy to confuse with previous approaches so building up the existing structure sounds good to resist unwanted ideas

Order problem demonstration


Filter grouping only works on filters with behaviour that only changes based upon properties of each input item. If behaviour is based upon values outside of a single item then a rearrangement of its position in the order can change the results

we have people sorted by their body shape and hair color

if we have a list of people: z= fat, ginger | x= fat, blonde | c= skinny, blonde | v= skinny, brown | b= skinny, ginger | n= middle, ginger

and we pass them through the filters: q= not blonde | w= is skinny

in the filter order q, w we get ( in stages): [ z, v, b, n] -> [ v, b]

in the filter order w, q we get: [ c, v, b] -> [ v, b]

If we replace filter q with: return the first two results of the hair color field in alphabetical order

in order q, w we get: [ x, c] -> [ c]

in order w, q we get: [ c, v, b] -> [ c, v]

Here the results are different, as the filtration was dependent upon factors other than the set properties of each evaluated item. The new filter q should be marked as such. This marking allows discerning bodies to preserve the specified order of q, w, and in this scenario the consistent result of c will be given upon each examination
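The demonstration above, reproduced as a quick check (people and filters named as in the text; q2 is the replacement hair colour filter):

people= {
	"z": ( "fat", "ginger"), "x": ( "fat", "blonde"), "c": ( "skinny", "blonde"),
	"v": ( "skinny", "brown"), "b": ( "skinny", "ginger"), "n": ( "middle", "ginger"),
}

def q( names): return [ n for n in names if people[ n][ 1]!= "blonde"]  # not blonde
def w( names): return [ n for n in names if people[ n][ 0]== "skinny"]  # is skinny
def q2( names):  # first two results of the hair color field in alphabetical order
	return sorted( names, key= lambda n: people[ n][ 1])[ : 2]

order= list( people)
print( w( q( order)), q( w( order)))    # ['v', 'b'] ['v', 'b']   -> same either way
print( w( q2( order)), q2( w( order)))  # ['c'] ['c', 'v']        -> order now matters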


Filter operation mechanism hinting

So something that I need to do is that if a filter requires information from all simultaneously passed data ids then it can specify q

If q is so then this filter can't be analysed with some ids in separation with the results being combined later; they must all be analysed together. In the mongo example, provided the stated operation is occurring synchronously, we must wait at each filter with q and then filter the other held filters; this also means that filters marked with q shouldn't be grouped. At this step, if no memory items needing filtration are present then this filter can be handled in the store; if memory items are needing filtration then all filtration for this filter must be done in memory

q is also the same property which determines whether the filter can be reordered or not

it may be useful to know what contents of a data piece a filter cares about ( id, tags, data); this helps speed up deciding whether or not versioned pieces can be passed. However the detection mechanisms can get complicated; might still be good

maybe q is better described as the filter relying on data which can change as a result of the positioning of the filter in the list; then it can be explained that synchronisation is needed when the filter relies on them all being present, and not just on positioning

when a filter is adding data, if it adds 5 pieces each time it is run then this means it must be synchronized

It is useful to know whether a filter filters out all versioned data, as this can be used to exclude versioned processing, which can be difficult due to its delta nature; this is shown in w. If a filter which adds data items is present after one which implements w then it is still unclear whether or not versioned data may need to be processed. This could be a marker about whether it may add versioned items, but I think it is good to have a more general marker without adding too many; this is shown in e

when queried with a bool mechanism question maybe the filter could return an unknown if that is the case

current in flux extra filter hints

q= whether or not the filter requires information from all input ( data items including splits along version id borders) ( default doesnt require info from all input)
w= whether or not the filter filters out all versioned data ( default doesnt filter all)
e= whether or not the filter adds new data items ( default doesnt add)
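These hints could sit as flags on the filter class; a sketch with the defaults stated above, spelling the names out rather than using q/ w/ e:

from dataclasses import dataclass

@dataclass
class FilterHints:
	requiresAllInput: bool= False        # q: needs information from all input data items at once
	filtersOutAllVersioned: bool= False  # w: guarantees no versioned data survives it
	addsDataItems: bool= False           # e: may add new data items to its output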

Version handling

Option to ignore version s. Maybe we want an operating store id pair s with no version s in it to act just as quickly as if no versioned data was supported. Versioned data likely can't be queried by the store, so it needs to be handled in memory; I still think the store should make this call, so the store needs an easy way of either specifying that certain ids must be done in memory or an easy way of filtering them in memory

so most operations operate upon specific versions within a piece of versioned data. However deletion and duplication can make sense to operate on whole versioned data ids; movement could move individual versions but makes more sense to move whole versioned data ids; addition is unknown as its interface is unknown

this is seemingly just an issue for presence manipulation and not retrieval or editing. Filter lists see ( unversioned wholes), ( versioned versions) as individual pieces, so what if filters that act upon data are passed to a presence manipulation operation

scenario:

store= {
	"3j90435802nikw3": {
		"Versions": {
			"INIT": {
				"cheese": 34
			},
			"43rjnjk3309": {
				"deltaRemove": "cheese"
			}
		}
	},
	"590rj3klscsxcnxkj": {
		"Data": {
			"cheese": "ff"
		}
	},
	"kjfim4momjv0": {
		"Data": {
			"brushes": 90
		}
	},
}

delete(
	filterList= [ ( HasAttr, "cheese")],
	knownQuantity= None,
):
	So here if we are operating on whole versioned data items what would we do
		We could check versions at a specific position, 
		we could check a range of versions and combine their results,
		we could pass all versioned data through the data filters,
		we could deny all versioned data through the data filters,

		so in the filtration step
			if we are passing all versioned through filters
				versioned data will only be removed by non data processes
				if we are able to detect that there are only data filters then we can return all versioned

			maybe we just filter all as normal but disregard the versions for now, one should know when calling the function that this is how it will operate

so for now presence manipulation can treat filtration of versioned data solely by operating on the whole versioned data and not individual versions

New filters

Version is nth version, including negative index and slices, option to pass or reject unversioned data

Component filter logic gates matching

The name of the combination mode is based upon the condition that needs to be met for a piece to pass

logicGates= {
	"AND": lambda a, b: ( a== True) and ( b== True),         # pass all          | piece passes if it passes all components
	"OR": lambda a, b: ( a== True) or ( b== True),           # pass any          | piece passes if it passes any components
	"NOR": lambda a, b: ( not a) and ( not b),               # fail all          | piece passes if it fails all components
	"NAND": lambda a, b: not ( ( a== True) and ( b== True)), # fail any          | piece passes if it fails any components
	"XNOR": lambda a, b: a== b,                              # all pass or fail  | piece passes if all components pass or fail
	"XOR": lambda a, b: a!= b,                               # pass or fail once | piece passes if all the piece is only passed| failed once # Will only fire if <= 2 component filters are present
}
for gate in logicGates:
	print( "\n"+ gate)
	for a in range( 2):
		for s in range( 2):
			print( a, s, logicGates[ gate]( a, s))

Code changes

To filters

Need to specify if behaviour is dependent on factors other than the state of a single evaluated data item. This includes if a field could possibly vary over time; uh oh, if multiple sources are allowed to hold memory references then this could be a wide occurrence
	Although anyone passing the filters to the filter list should not change the data themselves during operation,
	unplanned changes in the results would have occurred regardless

	The mark still needs to be placed if e.g. a single evaluated data item's property is called which runs a custom function that queries the societal time

Component filter takes filters as children not components
Only one component filter, takes logic gates, not called logic gate names

How do I implement a filter which matches data which has "only" the specified tags.

Could it be an addition to the HasTag such as some wildcard parsing, could it be a new component to a component, can it already be achieved with a component filter, can it be achieved with multiple filters in a list?

Multiple solutions may exist

So if we use components then all components must pass the result, so if we pass the ones that all have the specified tags, then in order to negate the ones which have additional tags we would have to receive data from that filter regardless. The components of the filtering system have very little communication and so any component which carries out the operation

Maybe a custom component like OnlyHasTag or maybe a little better if performant is to do it using
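
One possible shape for that custom component, as a minimal sketch assuming the component is simply handed the piece's tag set; whether this fits the real component interface is still open:

class OnlyHasTags:
	def __init__( self, tags: set[ str]):
		self.tags= set( tags)

	def evaluate( self, pieceTags: set[ str])-> bool:
		# Passes only when the piece has all of the specified tags and nothing else
		return set( pieceTags)== self.tags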

Initial discussion -> 2023-03-16 more -> 2023-03-24 more ( related) -> How to update data instances when their specification changes

To what extent do I want the c fuse api to function outside of simpl

What of existing libraries? fusepy looks good, it supports linux and mac, windows support was gonna be work anyway, why not commit to this?

This revolves around what we think of as globality

I think being more specific with the language we use to refer to this is better, as original ideas are forgotten in favour of more described aspects

Currently stored data in the config store:

  • Known stores | as LongTermStoreInterfaceInstances
  • Manual filter list configurations | as ManualFilterLists
  • Manual query group configurations | as ManualQueryGroups
  • Recently chosen types | as RecentTypes
  • Manual save group configurations | as ManualSavingGroups

This data does not specify its context. It is up to the discerning body to interpret the contents of the config store. A problem with that approach appears when designing new data: you could design a method of interaction that interferes with a previous one, or not

Usernames

For user configuration we could store the user name in the tag We should probably separate this from the operating systems concept of users

For now i dont see why this has to be anything other than a text entry box Storing this within the config store is simple, last entered string

also want a config option for whether we should submit personal data upon version updates.

Where this data should be stored is dependent upon how this software should be used. Talking in terms of software aspects: data and synchronisation/ collaboration

Well, if it is organised around the individual then this could influence my decisions. So I want planning around personal survival to be the focus, with the ability to then combine efforts; things must be modelled then to allow general work

So what would the benefits of congregating processing power in one place be? Well, the weaknesses include the homogenisation of decisions. The ability to combine processing at a central location but then to easily split off from it seems important. This design doesn't have to be implemented right away if desired, as I need to get the current design for production in place. Elements often have to be considered in conjunction to fit well, and this is balanced with the mental ability to hold these things in consideration; this can be augmented by physical stores of mental models

The data of the current user is closely tied to the computer and so we could store it there

What about not maintaining them in memory and loading from the store when needed?

Often I come across things which I am not currently implementing, and have to put off, due to my lack of person power. These may be hard to remember and come back to, so making a note of them in the original recording place, which is then collated with all others in another place, would be good

I come across aspects of design that I know will break if something else breaks. If this cannot be dynamically resolved then it should be noted, so that one can be notified quickly of what has broken upon changing something. In code it isn't as simple as looking at where code was called, as the thing that is changed may be a concept not realised in code

Storing code in a ltds allows me to gain the advantages brought about by these features. It could facilitate my ability to update data to changing specifications. Modelling using a format created using a general programming language# Updating data to changing formats

Another side effect could be loading the majority of the codebase from a possibly remote store which would be nice

An issue to bear in mind would be editing. I've discussed Delta changes previously. What of my ability to use git and its associated features? Well, git has versioning along with an associated user who brought about that change. We could implement a git store. Fuck it, I think it is better to use my approach

So the most direct translation of my current coding method would be to store a UTF-8 string for each current python file. Then, if we are using directory organisation, we could store these as tags and then create an Arbitrary hierarchy using tags. We could either use the data id as the name or we could store a name value along with the code

Storing as an ast seems ok too. We would lose the ability to use single line # comments unless we manually facilitate them. To do so we can embed unique id symbols within the ast which correspond to each comment line; we can then generate the ast and either delete the comment line and find a way to reference its place within the ast, or relate the ids to each comment in a mapping
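
A small sketch of the "relate the ids to each comment in a mapping" option, using the standard tokenize module to collect each # comment with a generated id and its line number so it can be stored alongside the generated ast; the function name is hypothetical:

import io, tokenize, uuid

def mapComments( source: str)-> dict[ str, tuple[ int, str]]:
	# id -> ( line number, comment text), to be stored next to the ast and re attached later
	idToComment= {}
	for token in tokenize.generate_tokens( io.StringIO( source).readline):
		if token.type== tokenize.COMMENT:
			idToComment[ uuid.uuid4().hex]= ( token.start[ 0], token.string)
	return idToComment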

Do specifications remain within one data piece or are they still contained within a python module? This problem remains the same whether we choose ast or utf8 storage. Well, what is interacting with these specifications as it stands, without ltds code?

  • Code both inside and outside of the module in which the specifications are currently defined uses them
  • Were we to have specification versioning already then we would be storing an id value for each version of our module file

So I don't think we need to separate them out for now. Maybe if we were to make code more modular, but this would seemingly necessitate editor code or editor facilitation code on my end.

Storing the module version seems ok

How to store version information of the data changes depending on the code storage method. If we are storing specifications contained within modules we would definitely store it at the top level of the data; if we are storing specifications independently then we could still place it at the top level

COULD ALSO CREATE A FILTER TO CATCH ALL INSTANCES IN STORES AFFECTED BY A CHANGE NOT JUST FOR SPECIFICATIONED TYPE INSTANCES COULD RUN CUSTOM CODE OVER THEM

So it could be automatic: we can discern changes from analysis of both specifications, then construct delta transformations between them. We may also want different changes to the automatically discerned ones; an addition of a new variable on the specification could be handled in many different ways: it could be set to the value of another variable, to a new value, or to the result of a custom function written at the time in question

When are these updates performed? After code saving, after a coding session, manually, on some sort of initial program loading, upon loading of the individual data pieces, or a combination of the aforementioned

So what are all the developer intended ways that a data's specification can change?

  • q= intention for the name to change
  • w= attribute deletion
  • e= attribute addition
  • r= the type to change
  • t= default value to change
  • y= default factory to change
  • u= the addition of a default value
  • i= the removal of a default value
  • o= the addition of a default factory
  • p= the removal of a default factory

things to do to an instance as a result of intended actions

  • q-> Update the name; has to be done in a specific order to avoid potential clashes
  • w-> Remove that attribute ( notPresent)
  • e-> Add that attribute; has to be done in a specific order to avoid potential clashes. The value of that attribute should be that of the default factory if present, failing that the default value if present. If we develop a system of Describing default instance values for types then that should be used. Failing that, we should either prompt for a ( value, script) or set it to None depending on the context ( newlyPresent)
  • r-> I imagine in some scenarios we would want to leave instances alone and in some we would want to update them. To update them, if we had a method of Checking if an object instance matches a certain complex type definition, then we could check if the attribute matches the new type and, if not, assign a default value if one can be found, else use a prompted value, script, or None
  • t-> I don't think anything should happen here. ( varChange) Maybe one should have the option to take action; maybe I wish to update all instances which had the default value; maybe if this is occurring within an environment where we are prompting, this is a desirable suggestion to present
  • y-> Unlike t, we can't reliably detect data created by the factory, so we do nothing
  • u-> All current instances would have been created without a default value, so nothing should be done
  • i-> Instances created with the old default value may be wished to be updated; we can detect the old value and set it to the new one
  • o-> All current instances would have been created without a default factory, so nothing should be done
  • p-> Unlike i, we can't detect the old value, so nothing should be done

It seems as though the ability to control what is prompted would be useful. Don't forget that we need to update not only the root of the saved data but also all contained data too

So list required bulk utilities and then list the ones used by each intended action response

So this list is constructed regardless of the need to update non root data

editing [
  • Change the name of an attribute at a specific path
  • Remove an attribute at a specific path
  • Add an attribute at a specific path with a specific value
  • Change the value of an attribute at a specific path to a specific value
]
finding [
  • search for all instances of a specification where an attribute doesn't match a specific complex type definition - only possible once loaded into memory
  • search for all instances of a specification where an attribute matches a specific value
]
Searching for all instances of a specification wouldn't work with a structure of saying "attribute equals" and would have to be different. The existing idea of finding data relies on finding specific data id items; granted, we could find items which contain instances of a specification matching the requirements, but we are also not in a position to return data about where those instances are in that data's tree, which is necessary. It doesn't seem like a good fit for this structure unless this method of working was implemented into the planned bulk action system, then the resultant updates of ( r, t, i)

Updating the value of an attribute at a path to a script's value has to happen in python memory, so it is not useful to have the store manage this

How does the order resolution work? Well, we can't change a name to one which already exists, and there won't ever be a final state with clashing names. So we can do all deletions first, then the replacements. In the replacements we can amass all pairs where replacement 0 has a new name which is the same as replacement 1's old name, and we must then find an order of those pairs in which, for every pair, the second replacement operation occurs before the first. Any replacements not in a pair can be done in any order. Then we can do additions
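
A minimal sketch of that ordering step, assuming renames are given as an old name -> new name mapping; graphlib does the dependency ordering and will raise CycleError for something like a swap, which would need a temporary name:

from graphlib import TopologicalSorter

def orderRenames( renames: dict[ str, str])-> list[ tuple[ str, str]]:
	# If rename A's new name equals rename B's old name then B must run before A,
	# so that the target name has been vacated first
	sorter= TopologicalSorter()
	for oldName, newName in renames.items():
		sorter.add( oldName)
		if newName in renames:
			sorter.add( oldName, newName)  # oldName depends on the rename keyed by newName
	return [ ( oldName, renames[ oldName]) for oldName in sorter.static_order()]

# e.g. orderRenames( { "a": "b", "b": "c"}) -> [ ( "b", "c"), ( "a", "b")]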


Updating any data object, not just at the root of a storage item, is a necessary action and needs to be considered when deciding how to implement bulk actions in the store, so:

How may one perform all of the required bulk actions on root and root contained data

So our approach here changes depending on what event we update our instances on. We can do it at the time of the specification change. We can choose to do a manual update of all instances to their latest version. We can choose to manually update instances of a specific specification to a specific version, including rollbacks. We can update instances of a specific specification to a specific version upon the data being loaded

Not all of these methods of updating need to be supported. So, two methods there:

  • Update instances of specific specifications to a specific specification version at an arbitrary point in time
  • Update the instances of specific specifications to a specific specification version upon loading into memory, ( deserialisation| store function| surrounding code)

One option to avoid updating all data is to save a map on the store of what should be updated and then to apply that upon the next loading; for mongodb the map could be in the form of a collection. The map could then be applied during deserialisation, maybe within the incrementalLoader? Maybe it could be applied in the background and at deserialisation if not finished yet. One option would be to save the specification version information for the whole tree at the root upon saving, which can then be interpreted by the programmatic aspect carrying out the specification update. Could this be accumulated into memory and then applied, or maybe done in one fell swoop? Depends on the db. Well, doesn't the choice of how to approach this depend on specifics of the store? Why not call a function to change instances of a specification to a specific version, and then the store implementation handles the update method, ( directly editing data, storing a map to apply on load, ...)

If going this route then it seems like this implementation would be separate to other bulk action implementations? Well, if going this route then it could be a command to a bulk action function, but seemingly separate to the ( value change, value removal) ones

How one may detect the intended change by analysing the two different specifications, and failing automatic detection, what different ways one may retrieve the data needed to perform the changes

Unsure whether to perform these checks on memory representations of the specificationed class| aST reps| another format

  • e- name is present in to, isn't in from, hasn't been marked as a rename by the user - quickMem or aST
  • w- name is present in from, isn't in to, hasn't been marked as a rename by the user - quickMem or aST
  • q- a deletion and addition pair have been marked as a name change - marked by coding body
  • r- two matching named attributes' annotated types fail an equality check - quickMem| quickMem loaded from aST
  • t- two matching named attributes' default values failing an equality check wouldn't suffice, as not everything implements an equality check; we could instead compare the ast if the nodes implement an equality check, or the string ast if they don't - quickMem aST| string aST
  • y- function objects may implement equality checks but we aren't doing anything here
  • u- we don't do anything here
  • i- we could do the same as t
  • o- we don't do anything here
  • p- we don't do anything here
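
A rough sketch of the automatic detection of e, w and r, assuming a simplified quickMem representation where a specification is just a dict of attribute name -> annotated type and the marked renames ( q) are supplied separately; the real representations are still undecided above:

def detectChanges( fromSpec: dict[ str, object], toSpec: dict[ str, object], markedRenames: dict[ str, str]| None= None):
	markedRenames= markedRenames or {}
	renamedFrom= set( markedRenames)
	renamedTo= set( markedRenames.values())
	additions= [ n for n in toSpec if n not in fromSpec and n not in renamedTo]      # e
	deletions= [ n for n in fromSpec if n not in toSpec and n not in renamedFrom]    # w
	typeChanges= [ n for n in fromSpec if n in toSpec and fromSpec[ n]!= toSpec[ n]] # r
	return additions, deletions, typeChanges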

When ( a name in from isn't present in to) and ( a name in to isn't present in from), then we don't know if this is a rename or a ( deletion, addition). In order to find out we have to prompt, in which we must discern renames by linking deletions and additions. Maybe through a linking ui

It is still important to discern whether this is an intended rename or a deletion and addition

Some tasks require complex user definition, instead of creating custom data formats to facilitate this user modelling, one may want to interact with custom code.

How to store code using a long term data store

Could make it possible to run maap from code and then custom code could just be registered during that process

it seems looping through batches of items is desirable as well as looping through items

#PlacementPosUndefined

During discernment of a production method for a specific thing Different described bounded systems may be composed together to form such a chain but just as the composition of the bounded systems we choose from is up for debate, so is their structure when faced with such a chain. Work can be done after the construction of such a chain to rearrange components to form better interoperability

This is where existing description methods fall flat, as I have a desire to describe the whole system but have no tool to describe the whole within a singular tool, leaving me dependent on data conversions between many different tools, taking up much time. Such a tool should be self developing

To allow the planning of survival means production and reproduction and following that, the production and reproduction of any goal following that.

This takes the form of analysing bodily needs and identifying what needs to be produced in order to perpetuate its survival Bodily needs

Does this take the form of goal entry and then bodily survival could be part of goal entry

How might this be entered? Well I could enter it as, ⦓ :

  • I want to meet these bodily nutrient requirements for myself
  • I want to maintain a space= Ǭ which contains air at a specific temperature and has boundaries that do not let water pass through
  • I want to maintain a place within Ǭ which can be used to clean my body
  • I want to maintain my computer planning systems
  • I want to maintain my bodily function
  • I want to build a spacefaring vehicle

Something of note here is that the drinking and the body cleaning water can both come from the same source although they are different entries in this list
This I think is mainly relevant to the discerning body of descriptions of production

Upon stating that I wish to maintain my body's function, I intended to mention healthcare. I have done this because of the present human distinction between eating and healthcare, but these could easily both come under bodily maintenance
That said, cleaning the body and maintaining a waterproof space with air temp could come under maintaining bodily function. Maintaining the computer systems may come under this as well, as they can be part of all of these tasks.
Building a spacefaring vehicle wouldnt come under this task

How to create a thing capable of understanding my requests and producing sets of potential courses of action that can fulfil the request. These can be comprised of control of automated manipulation things and manipulation events that should occur

Plans are produced with weighting towards desirable outcomes. Potential combinations exhibit an increasing rate of increase as the number of potential combinations increases; it wouldn't be feasible 
- Descri

So somehow this software must infer many things from these simple commands, It seems we can describe the process of inference or create a thing which can. Existing modelling methods vs development of a new self improving one

How to describe stated goals and their attainment

In a computer parsable format

So this automatic system should look at our simply described goals and be able to... An issue here is that describing goals in a simple manner might be silly. A body with language has the ability to take a very complex problem and describe it in a concise way: "I wish to live", "I wish to maintain my bodily function". I'm not sure that we should assume that they know what they mean when they say that. If they could describe what they wish for in better detail, would this encapsulate everything that I wanted the program to do? So we want to describe general desires. Upon this path we want to describe methods of producing certain things

We come to describe the attainment of survival means. This isn't necessarily a question of "I want x things at these times", but "I want to maintain this systemic structure"

Describing the desire for system configuration

When describing a component to fit within the programmatic system, such as a production process, I am imagining programmatic elements to scan production processes for what they produce, and a discerning body to arrange production processes in the attainment of production of a specific thing

This sounds like it would involve a description method able to describe the situation of one and a discerning body able to calculate maintenance requirements

An example would be to describe a human body, which may have opaque internals at first but able to connect up such a system later on

If protecting a systemic structure from decay is the goal, then in order to avoid describing a state of decay one would have to simulate the dynamics of decay of the system. If one was describing a system with a dynamic model of it, how would you specify an ideal state?

One could specify that they wish for these inputs to be met. One could specify that they wish for certain configurations to be present, maintain presence, or have periodic presence.

It either gets created when ( q= someone creates an instance within any software aspect) or ( w= someone calls a stores _getLTSD function) or ( e= a store interface aspect calls the _getLTSDStoreSpecific function)

So when a user performs w, I want to add that data to the _instanciatedData so that if someone else performs w later, it will return the same object in memory and not create a new one. If someone performs q then I wish this to be saved too

When all of the references to an object obtained in a software aspect by calling q or w go out of scope then I wish for the stores reference to be deleted along with its entry within the _instanciatedData structure. So unversioned data is stored in the base dict and versioned data is stored within a list held within the base dict

  • So when adding objects to the _instanciatedData dict, we should instead use weakrefs. This is not the end of our task, as we still need to remove entries from this structure when they have been garbage collected. So we need a method of doing this: the weakref constructor takes a callback argument which it seems like we can use, and the finalize class could also be useful. Considering we are using weakrefs, __del__ could also be useful. As the docs mention, finalize is a better choice than __del__ due to the dependence of __del__ on the interpreter implementation. So, ref callback vs finalize ( see the sketch after this list)

The ref callback is called when the object is about to be finalized. finalize is an equivalent of a weakref, but the callback appears to take arguments

Although the weakref.ref callback is passed the ref, it is dead by the time the callback gets it

  • So set the callback to one that will remove it from _instanciatedData

  • Need to call finalize.peek()[ 0] when retrieving from the store

  • Is e affected by this change?, no i dont think so
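
A minimal sketch of the ref callback approach, assuming a hypothetical store class whose _instanciatedData maps data id -> weakref; the callback only removes the entry, it does not touch the long term store:

import weakref

class Store:
	def __init__( self):
		self._instanciatedData: dict[ str, weakref.ref]= {}

	def _track( self, dataId: str, ltsd)-> None:
		# The callback receives the ( already dead) ref, so capture dataId instead
		def onCollected( deadRef, dataId= dataId):
			self._instanciatedData.pop( dataId, None)
		self._instanciatedData[ dataId]= weakref.ref( ltsd, onCollected)

	def _getFromQuickMemory( self, dataId: str):
		ref= self._instanciatedData.get( dataId)
		return ref() if ref is not None else None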


An old note I found in the _deleteData function:

"""
OBSCOMMENT What should deleting data do? We have a concept of a singular piece of data. This piece can be located within the long term store
and it can also be located in ram accessed by python software.
When we delete data it seems obvious to remove it from the long term store but what is less obvious is whether to remove the in memory
representation. The main concern is the interests of other software aspects that may be interacting with that data, should they be notified?
If we delete the ram data without notifying then it may cause other software aspects to fail
If we dont then this may break a concept worth following, it may be resaved
I think either, notify so other aspects can stop doing shit then delete it, or we do nothing

for now just remove from long term store
"""

So we have our ltds base class and implementations

The current things i wish to address are that:

  • I have introduced versioning. When I saved, I was just writing the ltsd data into the data rep. Now, if it is versioned, I must construct the diffs here. I must then do different saving actions depending on whether it is new or not; I think this is a good task for the saving func because we can just do a lil check for presence

    If it is still unversioned then I must just save the data

  • Solve the whole noRefsInSerialisedData thing, including in the serialise funcs Can be solved in conjunction with instance specific data should the config store be deemed to be instance specific

  • Possibly unifying ram management or making it more modular. Could call different... So if designing for the possibility of no memory management, would that still function? How would... So it could be moved to the baseclass, which could then call out to private functions which are implemented in the subclasses. Notifications seem ok; a lock combined with notifs is probably too much to plan around. Emission could be before the action so listeners can prepare just beforehand, or maybe they want it after for new data, idk. Could call the signal and then callers must call the corresponding function during

    On writing a _writeFirstVersion function: we can't rely on ltsd init wanting to save, so we can't save during dataInit, unless we restrict non saving to only occur within the store and use a locking mechanism. Although I maybe don't want it to be possible to write a store which doesn't save. Well, we are writing the dataInit func ourselves so we can do our initial save there. Well, we would still need to call out to implemented code in order to do the saving. We could either call the base _saveData function and have that always handle initial states, or we could create a _saveInitial function which takes a versioned: bool argument. One thing that can also be done on initial save is to check for existing data with the desired id. I prefer the _saveInitial method as it decreases the required work on the more often called _saveData function

    so _saveInitial is called from the _dataInit func, do we do version detection there

  • Cleanup terminology: ltsd wrapped representation ( data that includes tags, id, ...) -> lTSDData; data that is of interest representation -> data

    stored in quick memory -> {}QuickMem; stored in long term memory -> {}LongStore

  • Logging transactions. As data items, how? So I don't know if as data items, but if so then the filter system can be used fairly easily if needed; don't see the need atm

  • Many actions for saving, deletion, A lot of these would still require slow actions per item such as saving especially version diff calculations.

So to list current baseclass data and ( actions it implements and actions it doesnt), ignoring the extension protocols

Attribs

  • name

  • _initialSaveLock

Implemented actions

  • getCopy ->
  • getNonCentralCopy ->
  • getStoreIdPair ->
  • getDatasIntrinsicValue ->

  • _dataInit -> loads it into tracked mem, does initial ( blockable) save
  • _saveData -> handles versioned vs not, calls _writeNewVersion or _write
  • _deleteData ->
  • _reloadData ( might be able to make this generic using getLTSDFromStore)
  • getLTSD -> gets ltsd, calls implemented _getLTSDFromStore
  • getDatasAttributeValue -> finds it in mem, if not calls _getDatasAttributeValueFromStore
  • getQuickMemoryPrescence
  • _getDataFromQuickMemory

Non implemented actions

  • dataInit -> -
  • saveData -> -
  • deleteData -> -
  • reloadData -> -
  • doesDataExistInStore ->
  • getAllDataIdsInStore ->
  • getDatasTags ->
  • _getDatasNonStoreIntrinsicValue -> goes generic if we have generic mm, no bc implementation could still offer optimisation
  • getDatasAttributeValue -> -
  • getPythonObject -> -
  • getIdsWithQueryData ->
  • getQueryEditingUIAndInitialData ->

  • getVersionData -> if no id given, return latest, if specified, return all Do i need this? is it not created due to the store needing it to update? Upon saving we need to construct delta information, that means pulling in the data from the previous version. Now does this require a function to be implemented by the store? well if we want just the data then yes. If we can get by on searching for the previous commit using getDatasVersions and then using that to load in the ltsd, i think this will mean loading it as not tracked by store or finishing design on memory collection
  • _writeInitial ->
  • _writeInitialVersion ->
  • _write ->
  • _writeNewVersion -> should still update the tags
  • _deleteDataFromStore
  • getDatasVersions -> should return none for unversioned, or all
  • isDataVersioned -> could use getDatasVersions but this could be a significantly slower operation
  • getLTSDFromStore

Description of current ram management of data

So currently, upon ltsd instanciation, it is saved to the instance dict. If we request a load then, if it is in the dict, we pull from there. If we delete then it's deleted. It is added when first pulling from the store

ref count dropping from store so when all the refs drop out of scope we remove it from

Communication between different instances

currently signals on save and delete

e.g. many places use the ltsii I may want to come in and update the list halfway through well

big concern here is data state new pickers up what to do in different scenarios

REQUIRED: indexing into a set's body, not into its contained items as in # IntrinsicEquals; indexing into an arbitrary path's last contained item; a return value from indexing when it fails

IntrinsicEquals

So the data for this is the type of intrinsic; no index accepted. Intrinsics are unique to each piece. A piece can be unversioned data or a versioned version's data

from enum import IntFlag, auto

class Intrinsics( IntFlag):
	DATA_ID= auto()
	TAGS= auto()
	STORE_NAME= auto()
	STORE_TYPE= auto()

	VERSIONED_STATUS= auto()

	VERSION_ID= auto()
	VERSION_COMMITING_AUTHOR_NAME= auto()
	VERSION_COMMITING_COMPUTER_NAME= auto()
	VERSION_COMMIT_TIME= auto()
	VERSION_TYPE= auto()

intrinsicTypes= {
	Intrinsics.DATA_ID                         : str,
	Intrinsics.TAGS                            : set[ Tag],
	Intrinsics.STORE_NAME                      : str,
	Intrinsics.STORE_TYPE                      : type,

	Intrinsics.VERSIONED_STATUS                : VersionedStatus,

	Intrinsics.VERSION_ID                      : str| None,
	Intrinsics.VERSION_COMMITING_AUTHOR_NAME   : str| None,
	Intrinsics.VERSION_COMMITING_COMPUTER_NAME : str| None,
	Intrinsics.VERSION_COMMIT_TIME             : float| None,
	Intrinsics.VERSION_TYPE                    : CommitType| None,
}

So the data can be one from Intrinsics followed by data of the corresponding type

value should be converted to the rep that mongo uses

intrinsicQueries= {
	Intrinsics.DATA_ID                         : { "_id": value},
	Intrinsics.TAGS                            : { "Tags # Construct index into set": { "$all": list( value)}},
	Intrinsics.STORE_NAME                      : str, # If not the current store name then can append a filter matching nothing, otherwise matching everything
	Intrinsics.STORE_TYPE                      : type, # If not mongo then can append a filter matching nothing, otherwise matching everything

	Intrinsics.VERSIONED_STATUS                : { "Versions": { "$exists": "bool dependent on value"}},

	# For the below an index must be constructed into the version info
	Intrinsics.VERSION_ID                      : ( { "Versions": { "$elemMatch": { "0.0": value}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_COMMITING_AUTHOR_NAME   : ( { "Versions": { "$elemMatch": { "0.1": value}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_COMMITING_COMPUTER_NAME : ( { "Versions": { "$elemMatch": { "0.2": value}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_COMMIT_TIME             : ( { "Versions": { "$elemMatch": { "0.3": processed}}}, { "Versions.$": 1, "Tags": 1}),
	Intrinsics.VERSION_TYPE                    : ( { "Versions": { "$elemMatch": { "0.4": value}}}, { "Versions.$": 1, "Tags": 1}),
}

An issue here is that a find query returns matching documents, not subsections of documents, so how does one filter to specific versions? I was going to project to "_id" but how would one do this for versions? $elemMatch can be used in the find query to return the version, or maybe just $ in the projection. I think this means that versioned version pieces must be filtered for separately to unversioned pieces. It seems like it isn't possible to return specific data here, as $ in a projection does not accept indexes after it. A view could be created for retrieving version piece data, which would make it possible to not need to retrieve the whole piece in this scenario
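
A rough sketch of that view idea using pymongo, assuming a database named ltds and a collection named store whose documents hold a Versions array ( both names are assumptions); the view exposes one document per version piece, so queries and projections can target individual versions without retrieving the whole piece:

from pymongo import MongoClient

db= MongoClient().ltds
db.create_collection(
	"versionPieces",
	viewOn= "store",
	pipeline= [
		{ "$unwind": "$Versions"},  # One output document per version entry
		{ "$project": { "Tags": 1, "Versions": 1}},
	],
)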

HasTag

Data is a single string

{ "Tags # Construct index into set": { "$elemMatch": data}},

DataAtPathEquals

So the data input is ( path, value). The value can be run through a conversion to mongo's format after the path is converted to mongo's format; maybe it is declared to be in a "python in mongo's rep" context and then converted to mongo, after which it can be sent to a query op

{ convertedPath: value}

DataAtPathOfType

The input is ( path, pythonType). If it is a custom object then the module and type will be stored at the index specified by path in a dict, so we can convert the pythonType into a module and qualname

If it is a bson compliant type then it will be stored in the bson representation. This type can be matched against using the $type query operator; this operator takes a bson type ( number or alias) and can take multiple aliases to check against. The type can be checked for compliance and the alias found by using a dict

import re
from datetime import datetime
import bson

bsonComplianceAndMongoAlias= {
	type( None): "null",
	bool: "bool",
	int: [ "int", "long"],
	bson.Int64: "long",
	float: "double",
	str: "string",
	list: "array",
	dict: "object",
	datetime: "date",
	bson.Regex: "regex",
	re.Pattern: "regex",
	bson.Binary: "binData",
	bson.ObjectId: "objectId",
	bson.DBRef: "dbPointer",
	# bson.Code: unknown mongo bson alias ( possibly "javascript")
	bytes: "binData",
}

This could possibly be stored or somewhat calculated in the bson conversion set

With dict, $type would pass matches for custom objects, which isn't desired, so dict should instead check for an object with a lack of presence of the type and module strings
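
A sketch of that dict special case; the "Type" and "Module" field names are assumptions standing in for whatever the serialiser actually stores for custom objects:

def dataAtPathOfDictQuery( convertedPath: str)-> dict:
	# Match an embedded object at the path which lacks the custom object marker fields
	return {
		"$and": [
			{ convertedPath: { "$type": "object"}},
			{ convertedPath+ ".Type": { "$exists": False}},
			{ convertedPath+ ".Module": { "$exists": False}},
		]
	}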

Combination

so combinations are

ALL_PASS: AND
ANY_PASS: OR
ALL_FAIL: NOR
ANY_FAIL: NAND
ALL_PASS_OR_ALL_FAIL: XNOR
DIFFERENCE_EXISTS: XOR
ALL_PASS: { "$and": [ components]}
ANY_PASS: { "$or": [ components]}
ANY_FAIL: { "$not": { "$and": [ components]}}
ALL_FAIL: { "$nor": [ components]}

ALL_PASS_OR_ALL_FAIL: { "$or": [ { "$and": [ components]}, { "$nor": [ components]}]}
DIFFERENCE_EXISTS: { "$not": { "$or": [ { "$and": [ components]}, { "$nor": [ components]}]}}

the more complex combinations will calculate filters twice unless mongo db optimises this

For situations where we need a list of queries for the children, we can recursively call the query dict generation function; maybe then we can specify not to combine the results, and maybe we could get a nice return value for whether there was a full conversion
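
A rough sketch of that recursive generation, mirroring the gate mapping above; Combination and supportedFilters are hypothetical stand ins for the real component filter and per filter converters, and the boolean in the return value reports whether the conversion was full:

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Combination:
	gate: str
	children: list

# Hypothetical per filter converters, e.g. HasTag -> mongo query dict
supportedFilters: dict[ type, Callable[ [ Any], dict]]= {}

gateToMongo= {
	"ALL_PASS": lambda qs: { "$and": qs},
	"ANY_PASS": lambda qs: { "$or": qs},
	"ALL_FAIL": lambda qs: { "$nor": qs},
	"ANY_FAIL": lambda qs: { "$not": { "$and": qs}},
}

def generateQuery( filterObj)-> tuple[ bool, dict]:
	# Returns ( fullyConverted, query); fullyConverted is False when any child could not
	# be expressed as a mongo query and python side filtering is needed
	if isinstance( filterObj, Combination):
		converted= [ generateQuery( child) for child in filterObj.children]
		return all( ok for ok, _ in converted), gateToMongo[ filterObj.gate]( [ q for _, q in converted])
	converter= supportedFilters.get( type( filterObj))
	if converter is None:
		return False, {}
	return True, converter( filterObj)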


There seems to be an unresolved issue in versioning that needs exploration

Combining different filters results together

Each filter can be filtered individually but we can group them together

the function creating these should only ever be passed rearrangable filters

This is one method of Creating and updating models to a changing world

Modelling using a format created using a general programming language# Updating data to changing formats

Instead of implementing thousand multiples for every mt, we can instead allow entry of multiples in a seperate manner.

In addition to this we could default to a different notation than kilo, giga, micro e.t.c. and use our own; I guess the x10^n notation works here. For compatibility we can list these ( see the sketch after the list):

  • quetta Q 10^30 nonillion
  • ronna R 10^27 octillion
  • yotta Y 10^24 septillion
  • zetta Z 10^21 sextillion
  • exa E 10^18 quintillion
  • peta P 10^15 quadrillion
  • tera T 10^12 trillion
  • giga G 10^9 billion
  • mega M 10^6 million
  • kilo k 10^3 thousand
  • hecto h 10^2 hundred
  • deka da 10^1 ten
  • deci d 10^-1 tenth
  • centi c 10^-2 hundredth
  • milli m 10^-3 thousandth
  • micro μ 10^-6 millionth
  • nano n 10^-9 billionth
  • pico p 10^-12 trillionth
  • femto f 10^-15 quadrillionth
  • atto a 10^-18 quintillionth
  • zepto z 10^-21 sextillionth
  • yocto y 10^-24 septillionth
  • ronto r 10^-27 octillionth
  • quecto q 10^-30 nonillionth
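
A small sketch of mapping these prefixes to the x10^n notation mentioned above, purely illustrative:

prefixExponents= {
	"Q": 30, "R": 27, "Y": 24, "Z": 21, "E": 18, "P": 15, "T": 12, "G": 9,
	"M": 6, "k": 3, "h": 2, "da": 1, "d": -1, "c": -2, "m": -3, "μ": -6,
	"n": -9, "p": -12, "f": -15, "a": -18, "z": -21, "y": -24, "r": -27, "q": -30,
}

def toX10Notation( value: float, prefix: str)-> str:
	# e.g. toX10Notation( 4.7, "k") -> "4.7x10^3"
	return f"{ value}x10^{ prefixExponents[ prefix]}"
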
flowchart RL
	Processing
	Controller0
	Processing <-->|"( command, return) connection pair"| Controller0
	Processing <-->|"( command, return) connection pair"| Controller1
	Processing <-->|"( command, return) connection pair"| Controller2
	Processing <-->|"( command, return) connection pair"| Controller3
	Processing <-->|"( command, return) connection pair"| Controller4

Here we use a general programming language to describe a specification of data, and we then use the concept of instanciation to create variations of this model specification in ram. In order to save these variations of the specification between program instanciations we must transfer their data to long term storage.

We may state that all instances will have custom values to fill the fields specified, instances do not often have the ability to override the behaviour of functions from their class.

An example is production description using production processes

So we can use this model at our program runtime to instanciate instances, we then may have bespoke ui to edit these elements such as the production process or we can use autogenerated ui. When wanting to continue into the future with these definitions we must save them to a type of long term data store.

Updating data to changing formats

IF THIS IS COOL IT WILL NOT ONLY UPDATE THE TOP LEVEL DATA STORED IN A LTDS BUT WILL ALSO BE ABLE TO UPDATE DATA STORED WITHIN THE TOP LEVEL DATA NON TOP LEVEL DATA STILL PRESENTS THE SAME PROBLEMS AS TOP LEVEL DATA

When altering the specification, our data in the long term stores could possibly be made invalid. Within the currently planned Bulk data editing actions, if we held in some capacity the knowledge of what had changed and what versions the data was created on, we could queue up some bulk actions, such as the addition of a new attribute with a certain value, removal of all values, or setting some named data to the value of other named data. There are problems with this: it relies on us having external knowledge of what our selected data is and the new format.

We could make this process at least semi automatic by having knowledge of what specification each piece of data conforms to. To do this we must store a reference with the data or store a reference in the specification name. I think it is best to store an id separate to the specification name; I could store a version number that counts upwards or I could store a uuid. I am currently defining the specification in the code. I will always have to have the specification represented in code for the data instances, provided that it follows a similar structure to python. I am defining the specification in my coding language and therefore I would like to not redefine it elsewhere. This means that when the code changes I would like to store versions of either that file or the specification, which I can then reference in the data created by it. If I were to implement versioning for data in a long term data store then I could store the code in a long term data store and use that system to version the code also

Versioning data in a long term data store. How to store code using a long term data store. What does using this method mean for data created from specifications that are not versioned using your system? Well, if we are purely versioning by ltds versioning then no types but those present in a ltds can be versioned, and we must not store a version value for any data not in a ltds. I think this is ok.


If modelling using data type instances within a custom language, then a big opposing force is the rigidity of systems constructed around your modelling method when faced with change. These need to be dealt with. Systems constructed around modelling include:

  • the users point of interaction with creating the models such as a gui design
  • instances following the protocol of the model

We could use a general language itself to model structure. Structural definition within a programming language isn't good for storing numerous pieces of small data, but is very good at defining complex relationships and behaviour whilst still providing a consistent interface/ protocol. Modelling using custom created formats can be limiting, but tools such as python are designed with general purpose in mind and can adapt well to many situations.

A common interface may be needed or a way to emulate my minds ability to dynamically locate interface points.

How we may go about informing programmatic system components of custom written code can be discussed here: Implementation of user specified code

To use an example, if we wanted to model the production of potatoes then we could describe this using a python class or set of functions.

On the querying interface

Well, we have a choice here. Before, we were conducting a find query which would return the id if it matched. But we have versions with delta changes; do we just let the find query work over the versioned data as is and return the id and all its versions if it matches? In order to search the versions' data we would need to construct each version and present them in a way that is accessible to mongodb. I think this is too much and the finder should simply perform operations upon the data as present in the store. I think if a field other than the find field would be useful when querying data then that could be implemented

Getting version ids of data

So to get version ids we must get fullDict[ "Versions"][ *][ 0]. Maybe there is a mongodb method for this; yeah, so we can do a map

cursor= self.store.find( 
	filter= queryData,
	projection= {
		"versionId": {
			"$map": {
				"input": "$a",
				"as": "versionId",
				"in": { "$first": "$$versionId"}
			}
		}
	}
)
# Watch out with large data sets, maybe this should not all be pulled into memory at once
[ tuple( doc.values()) for doc in cursor]

There seems to be a thing called aggregations which may be quicker than map but i know map for now and the speed isnt too pressing

Important data structures

[!Whitebg] Data available from anywhere in a mounting session Arbitrary hierarchy definition and directory data structure


Caching

  • Construct entire idToDataOnIdData dictionary from all data in lTSI
  • Construct the filter list and connect their invalidation function, if they are a tip, set their update function
  • Input the ids into the root directory causing the id filtration through the blue line on the right. When a tip is hit, the blue process pauses and the yellow begins. The yellow process applies additional organisation and also constructs the dict[ namePlusExts(UniqueToDirectory), id] on index 1 of the directories runtime variable

[!whitebg] AHDMountingFiltrationCachingProcess


Old structure and new byte saving and retrieval mechanism

[!whitebg] Old mounting function and data communication organisation

So if current places want byte data associated with a communicationId then they call getBytes. If there isnt currently any byte data associated with that communicationId then getBytes calls getBytesFromStore in order to load in from lTSI.

I want saveBytes to check if its versioned and do a delayed save if so whilst also saving that to the latestBytes dict for getBytesFromStore to read from

getBytes -> getBytesWithCommunicationId saveBytes

So the problem is that successive operations often make delta changes ineffective, and to make them more effective one must group those changes in time and execute them after the group has finished

An issue currently unsolved here is that queries during that group, before the save, need to receive data from the updated data even if it hasn't been pushed to the store yet

So the getBytesFromStore and saveBytes functions are always manipulating the bytes, whether in a FSFileRep or not they take the bytes and return the bytes

This means that we might be able to do the grouping in them. If saveBytes is called and the data is versioned then we should queue the byte data, and only after the timeout should it be saved. getBytesFromStore should read this byte data whenever called, if it is present

This byte data needs to be accessible to the tracking thread. Without directly sharing memory we can use pipes, queues and managers in the multiprocessing library

This can be done with only one thread, each versioned piece of data can store its pending bytes in a piece of memory

The thread can loop. It loops from the first recorded version grouping; if the difference between the current time and the attempted save time is larger than the timeout then it will take the bytes and

Succinctly, on the main thread: when it encounters a request to save byte data, if it is not versioned, it carries out the save. If it is versioned then:

  • If the tracker is not running on its loop then it is started, the time of attempted save is recorded along with the requested byte data.
  • If the tracker is running then we update the time of attempted save as well as the byte data

This requires a mechanism for the main thread to update the specific entry, or the main thread can just dump all of the requests into a pipe and then the secondary thread can pop out all of the requests; if there are multiple for one piece of data then it can use the most recent one. It can then re add any necessary entries that it has taken out

I think the ltsd data will need to be passed back and forth in order to save unless the other thread consistently handles saving and also loading
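
A minimal sketch of that grouping using a single secondary thread, with hypothetical names; saveToStore stands in for the real versioned save, and only the newest bytes per communicationId inside the timeout window get saved:

import threading, time, queue

class DelayedSaver:
	def __init__( self, saveToStore, timeout= 2.0):
		self.saveToStore= saveToStore
		self.timeout= timeout
		self.requests= queue.Queue()  # ( communicationId, bytes) pushed by the main thread
		self.pending= {}              # communicationId -> ( lastAttemptTime, bytes)
		threading.Thread( target= self._loop, daemon= True).start()

	def saveBytes( self, communicationId: str, data: bytes)-> None:
		# Called from the main thread for versioned data
		self.requests.put( ( communicationId, data))

	def _loop( self)-> None:
		while True:
			# Drain all queued requests, keeping only the most recent per id
			try:
				while True:
					communicationId, data= self.requests.get_nowait()
					self.pending[ communicationId]= ( time.monotonic(), data)
			except queue.Empty:
				pass
			# Save any entry whose last attempted save is older than the timeout
			now= time.monotonic()
			for communicationId in list( self.pending):
				attemptTime, data= self.pending[ communicationId]
				if now- attemptTime>= self.timeout:
					self.saveToStore( communicationId, data)
					del self.pending[ communicationId]
			time.sleep( 0.1)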

[!whitebg] New mounting byte saving and retrieval organisation


Runtime directory structure, filter list, map updating

Another thing to solve here is how to do successive alterations to the cached ids in the filter lists as well as ...

So to list what alterations are made to the cached representation of the store, then assign them to symbols for quick reference and then to explain what must be done upon each alteration to facilitate expected behaviour

| id | fuse initiator | description |
| --- | --- | --- |
| q | create | addition of a new id to the store which should be filtered within current rules to be at a specified path |
| w | mkdir | add a directory into the aHD with a specified name at the specified path |
| e | rmdir | remove a directory from the aHD along with all recursively contained directories. Remove all recursively held data from the store; could either remove from the store or remove from the root filter ( depends on what the root filter is, may need a special inclusion tag if this is desired) |
| r | unlink | this should either remove the specified data from the store or should remove it from the root filter |
| t | rename | rename of a "file" in place |
| y | rename | rename of a directory in place |
| u | rename | movement of a "file" |
| i | rename | movement of a directory |
| o | rename | rename and movement of a "file" |
| p | rename | rename and movement of a directory |

q: need to be able to be given an aHD with filters, additional filtration rules and a directory within it, and be able to produce data which will be filtered to that directory. That would require additional definition alongside the filters which defines how to produce data that will pass them, or devising a way to describe both the filtration and the creation of filtration matching data simultaneously. Both of those solutions sound like too much work currently for the return, so we can just create data that will match the has tags setup as well as the additional filtration rules. This piece of data's presence in other directories may change as a result, and therefore it should be incrementally run through the aHD filter list and map updates. If a file can't be filtered to be in a specified dir then an error can be raised: NOTUNIQ, PERMISSIONERROR, ADDRNOTAVAIL

w: need to place that directory within the aHD, if no additional filtration options are enabled then i need to construct the runtime information and the map.

  • If tips only is enabled, the newly created directory will not have any child directories, so these don't need to be considered when creating the new directory; however, should the new directory filter to any ids then all containing directories' maps will need to be updated

  • If exclusive is enabled then the new directory's maps will be affected by the prior directories in the yellow order, and directories after the new one will be affected. This means the exclusivity set should be reconstructed from prior directories, and then, if the new directory filters to anything, the directories after the new one should have their maps updated in the yellow order. Well, really they just need to have ( the new directory's filtered ids - existing exclusions) removed from their maps

  • If both tips only and exclusive are enabled, then if the new directory filters to any ids, the prior exclusivity in the yellow order should be calculated and used within the new directory's additional filtration. Then all new additions to the exclusivity set should be removed from the maps of all directories following the new directory in the yellow order. This includes non tip exclusions, as all containing directories are after the new directory in the yellow order.

    this can be done by creating the runtime data for the new directory and then forcing map recreation for:

    • none if no extra filtration
    • those above and including the new if tip-s only
    • all if exclusive regardless of tip-s only status

e: to avoid the envisioned unlink operation removing from directories that are already being removed, we can amass a list of all contained data of the removed directory, remove it and any contained directories from the aHD, deal with the repercussions and then remove the contained data. So, the repercussions of removing a directory: none if no additional filtration. If tips only, then all data that would have been excluded from containing directories will be removed anyway. If exclusive, then there are no data ids that would have been added to the exclusive set that won't be removed anyway. So there are no repercussions that won't get cleaned up by removing the data, and no directories after in yellow will want the data as it is removed. Well, one would expect in current filesystems for the contained data to be removed; if the data is contained in other places, what is to be done? Well, those pieces can either be ignored or removed; we'll just remove for now. Errors: not a dir

r: let's only envision deleting data from the store for now. So when a piece of data is unlinked it should be deleted from the store, removed from any maps that it is present in, and also removed from the idToDataOnIdData dictionary. A list of referencing directories on the idToDataOnIdData dictionary could be useful here.

t: need to recalculate the data's storage method within the store. Need to see what it can be stored as within the store and also the necessity of the fsfilewrapper; we can use the function in HierarchyAndMounting. Also the namePlusExts needs to be updated on idToDataOnIdData and also on each of the maps which reference it; it may be useful to maintain a list of referencing directories within idToDataOnIdData.

y: if the directory has custom filters then nothing needs to be done apart from changing the name. If the directory has no custom filters then all ids filtered underneath it should have the tag which matches the directory name changed to the new name. Now, what happens to other directories that contain those renamed files? They may have been doing other things with that tag information in their filters, and so they will need refiltering. When searching for directories that will need to be updated, if we are not distinguishing between those with custom filters or not, then we must conclude that the root directory is the place at which refiltering should occur, as we are not sure if this new tag will be matched against the root directory. If we are distinguishing, then for each . One possible solution to avoid a complete recalculation would be to implement incremental changes to the operating ids worked upon by a filter list. This would require updates to the filtration mechanism so existing caches can be used in the new calculation and then updated.

  • If tips only is enabled then any containing directory of the refiltering directory will need to reevaluate their ... Is this worth speculation now that it seems we must do complete recalculations? Well, we are only doing a complete recalculation if we aren't regarding the custom filter presence of directories containing the updated ids; then we use the root, causing an update to all of the filter list results, which prompts a full map update

The action taken here generalises to the action that must be taken upon a bulk update of tags

u: this faces the same challenge as q: we need to be able to place an id at a specific location. So for the specified directory path, for all directories with no filter, we can add those tags. In regards to additional filtration rules, these may exclude the file from actually ending up where it was desired to be; this is fine. Given the new tags, other directories may need to hold this in their map; incrementally filtering this through the entire aHD seems like a solution to achieving this. Depending on

i: when moving a directory we must take all of the directories above the moved directory's initial position which are not using custom filters and amass their names as ⌿. We then take all of the directories above the moved directory's desired position which are not using custom filters and amass their names as ⤘. On all the data recursively contained within the moved directory, tags matching ⌿ should be removed and ⤘ tags should be added. The directory is then moved within the aHD and the filter list is relinked. The changed data needs to be incrementally run through the aHD. Then, to completely/ not incrementally update the additional filtration rules of the following: if exclusivity is active then z must be run, then the moved directory's map must be updated along with all its contained; if only tips only is active then x must be run; if exclusivity is active then c must be run

This isnt precise and is hard to think through

o: the same action as t must be taken initially, where the data's storage method is reevaluated and applied. Then we must change the namePlusExts in idToDataOnIdData, remove the id from any referencing maps, readjust the tags as in n, and then incrementally filter the data through the aHD

p: ...

commonalities thought to be needed:

  • z need to calculate excluded data up to an arbitrary directory following the yellow path
  • x need to run additional filtration rules and reconstruct the map on all containing directories of a given directory from the bottom up
  • c need to run additional filtration rules and reconstruct the map on all directories following a given directory in the yellow order
  • v the need for incrementally adjusting the operating ids of a filter list
  • b Support for incremental updating
  • n replacing a list of datas tags with that of all names of directories which do not have custom filters in a given directory path and then running an incremental filter through the aHD for them
  • m takes a list of ids, removes them from all directories maps referenced in l, removes the idToDataOnIdData entry, removes them from the store
  • l storage of a list of each directory which references a specific id on idToDataOnIdData

Succinct rewrite referencing symbolised commonalities:

Does it affect filter list filtration, and if so, incrementally or wholly? All filter list filtration affects additional filtration. If it doesn't affect filter list filtration, then does it affect additional filtration?

q: create data with the desired name using the HierarchyAndMounting function to determine saved format. Then run n with only the new data and the desired path

w: create the directory with the specified name at the specified path in the aHD, no custom filter. Then construct the filter list, filter the ids, link the filter list. If exclusivity is enabled then run z up to the new directory. Then update the mapping. If only tips only is enabled then run x. If exclusivity is enabled then run c

e: Amass a unique list of ids under the proposed removed directory, remove that directory from the aHD and from l for each contained id and then run m over all amassed ids

r: runs m on the specified id

t: run getPythonInstanceFromBytes from HierarchyAndMounting with CHECK_IF_NEEDED using the new filename. Update namePlusExts ( on each directory map which references the id in l), ( in idToDataOnIdData)

y: Rename the directory. If the directory is using custom filters then we are done, otherwise for all child ids which have a tag matching the old directory name, replace that tag with the new name. Then run an incremental refresh of the entire aHD with those changed ids ( b)

u: run n on the data

i: ...

o: ...

p: ...


Caching speed improvement

Another thing to solve is the speed of caching. This fits in with the need to do bulk operations upon a store. If a bulk operation retrieves data then it can return something which can grab the results incrementally. This incremental object can also have an option to dump all into memory at once. The stores can optionally implement my filters and can throw an error if the filter isnt supported

Or if the filter isnt supported they can do the default: pull into memory and use the python filter implementation. There should then be an option to either throw an error on unsupported filters or use
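A minimal sketch of the incremental result object described above, assuming the store hands back some iterable cursor; the class and method names are placeholders:

from typing import Any, Iterable, Iterator

class IncrementalResults:
	"""Wraps a store specific cursor so results can be consumed one at a time."""
	def __init__( self, cursor: Iterable[ Any]):
		self._cursor= iter( cursor)

	def __iter__( self)-> Iterator[ Any]:
		return self._cursor

	def dumpAll( self)-> list[ Any]:
		# The option to pull every remaining result into memory at once
		return list( self._cursor)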

# Example mongodb implementation
def filterHandling( self, filters, errorOnUnsupported= False)-> dict:
	filterQuery= dict()
	for filter in filters:
		if type( filter) in self.supportedFilters:
			currentFilterQuery: dict= self.supportedFilters[ type( filter)]( filter)
			# Might not be as easy as updating the filter dict with each conversion
			filterQuery.update( currentFilterQuery)
		elif errorOnUnsupported:
			raise ValueError( f"Unsupported filter type: { type( filter)}")

	# Do we use the query to find ids here, do we take some sort of argument to say what aspects of the data to return here, do we just return the filterQuery as queryData
	# It should become more clear as we go on
	return filterQuery


found, filterHandling= getDataForType( store, "handleFilters")
if found:
	filterHandling( store, filter, )

Basal metabolic rate is defined as the bodys energy requirement at rest

They categorise nutritional requirements

Proteins

Needed for:

  • Tissue production and repair
  • Hormone manufacture
  • Fluid balance maintenance
  • All seem to reference just material synthesis

Defines essential amino acids in older humans as:

  • Leucine
  • Isoleucine
  • Lysine
  • Methionine
  • Phenylalanine
  • Threonine
  • Tryptophan
  • Valine
  • Histidine

States that Arginine is essential in younger humans

Proteins that can be broken down into all of these amino acids are described as complete. Biological value is a casual scale for describing the adequacy of the protein within a food. States that within animals, except for gelatin, all proteins are complete. It is stated that plant proteins such as nuts, peas, beans, lentils and soya have a limited number of amino acids. It is stated that older humans should receive roughly one third of their daily requirements as complete protein to receive the optimum intake of amino acids.

Vitamins

table 4 of Applied nutrition and dietetics- Joan huskisson ( doubtful of bias towards meat production, hasnt looked into alternatives)

Vitamin | Major sources | functions | effect of deficiency | chemical and physiological characteristics | recommended daily adult allowance
retinol ( A) | dairy fats, fish oil, egg yolk, liver | | | |
carotene | carrots, pumpkin, spinach, broccoli, apricots, yellow peaches | | | |
thiamine ( B1) | wholegrain products, green leafy veg, milk, meats ( particularly non muscle organs) | | | |
riboflavin | same as thiamine | | | |
niacin | green leafy veg, wholegrain cereals, lean carcass, bird carcass, fish carcass, non muscle carcass components | | | |
( B12) | milk, carcass, liver, kidneys | | | |
folic acid | dark green veg, yeast, liver, kidneys | | | |
ascorbic acid ( C) | citrus fruits, tomato-s, potato-s, strawberry-s | | | |
( D) | sunlight, margarine, fortified milk, fish liver oil | | | |
tocopherols ( E) | vegetable oil-s, peas, beans, leafy vegetable-s, wheatgerm | | | |
( K) | green leafy vegetables | | | |

non animal production alternatives to ( retinol ( A), ( B12)) needed

Fats

Composed of carbon and hydrogen. The most dense form of energy, with roughly 37x10^3 J per 1 g

...

One idea that I like is not related to where it is defined but to enter the values as offsets from other things, then we can choose our base frame when we come to output construction

  • Versioning data in a long term data store# Supporting existing systemic aspects

  • Discussion of the viability of modelling using a general programming language and having the discerning body access common interfaces, maybe compare with modelling in a modelled format. There is already this discussion in this knowledge base, i think a discussion more scoped to the viability of the interfaces is good. Play through some examples of modelling reality system components and having an imagined discerning body interpret them. Using the modelled modelling format is easier to conceptualise, maybe start with that

  • Make deserialiser not freak out when a field marked with NotRequired isnt present

  • Construct concept of general long term format which is then made to be compliant with different long term stores such as json or tabular for example. Functions like getIntrinsicValueFromDictionaryRepresentation could take a compliance set arg or just be compliance set specific

  • Cleanup of the solution to overcome the initiation of the knownStoreInterfaces from the config store as the config store is being loaded, setting a state upon the loading functions seems nicer than passing data through all of the chains

  • Addition of mass data manipulation with delayed loading structure Potential performance bottlenecks Retrieving data from a long term store

  • debugging code that we dont want to occur at a full speed runtime, maybe layer them on with delta change system


  • The store interfaces currently have only functions to retrieve single pieces of data and also lack the ability to loop through a returned result, only pulling each one into memory as it is needed. When people do mass data querying then this may cause strain. I would recommend shifting towards this sequential loading structure with more functions designed around mass data manipulation Retrieving data from a long term store

constructed idea of indexing context could be generalised, level-s, format-s spanning level-s unifying-conversion-and-indexing

filtration and calling functions which filter can cause infinite repetitive execution ( if called in an object initialisation sequence&& if the filtration process instantiates data that it filters), the coding body can query filters for this purpose. More in 2023-05-22

Indexing into a container and recording or editing a result is a common operation and would do well to have a generic implementation integrated with the indexing context-s concept

So I am dealing with the issue that I want to retrieve information about the attributes present on a store. Previously I was loading the serialised data and then finding a top level attribute there. Now I want this to "work with versions" I also want to use dot notation in an attribute query to return sub attributes So using current methods, in order to retrieve versioned datas attribute value, I must load the versioned data in

Mongodbs interface has a structure in which I can retrieve a piece of data which knows how to get the desired data and then I can iterate through that piece of data in order to retrieve items from the store one by one. The used ones go out of scope and are cleaned up by pythons garbage collection as you go. This is instead of loading them all into memory at once.

Long Term Data Store api

So this can be detected when replacements are detected by the used sequence matcher

this creation behaviour can be toggled with a default argument to deltaConstruction

For this, we have: an index where the replacement occurs the length of index s on q which are being lost the length of index s on w which are being gained

example:

4
6 ( 4, 5, 6, 7, 8, 9)
3 ( 4, 5, 6)

7
3 ( 7, 8, 9)
5 ( 7, 8, 9, 10, 11)

11
2 ( 11, 12)
2 ( 11, 12)

So if the length of q and w are the same then it is clear that we can do a direct comparison

if they are of differing lengths then there are different courses of action one could take to obtain a contained delta transform

we could match each possible combination and stick with the ones that create the smallest delta size, unsure how to measure size we could match each possible combination whilst retaining the joined nature of the recursive deltas in the index dimension

i think the easiest is just to match the first block, then any remaining differences can be noted as an insertion or deletion
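A rough sketch of that "match the first block" approach; the operation names here are placeholders and the per element delta calculation is assumed to happen elsewhere:

def pairReplacedBlocks( qBlock: list, wBlock: list):
	# Elements at the same offset are paired so deltas can be calculated between them
	paired= list( zip( qBlock, wBlock))
	# Any remaining elements become a plain deletion or insertion
	if len( qBlock)> len( wBlock):
		remainder= ( "delete", qBlock[ len( wBlock):])
	elif len( wBlock)> len( qBlock):
		remainder= ( "insert", wBlock[ len( qBlock):])
	else:
		remainder= None
	return paired, remainder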

so each of these will receive the data as an input. the possible responses will be a TransformType

from enum import IntFlag, auto

class TransformType( IntFlag):
	UNIDIRECTIONAL_DELTA= auto()
	BIDIRECTIONAL_DELTA= auto()
	NONE= auto()
	FULL= auto()

all recursive delta lists should match the preferred and fallback specified of the container. if this is passed to the creation of the contained delta list then this is enforced by calculateDeltaData, so possible returns will be one of UNIDIRECTIONAL_DELTA| BIDIRECTIONAL_DELTA, NONE, FULL

If NONE is returned then that will be in objection to the equality comparison of the SequenceMatcher

If FULL is returned then no deltas could be constructed this could be represented as a replacement then but instead of creating a replacement for all of the unmatching, the end results can be found and then any that a delta couldnt be constructed for can be grouped together as a replacement

we are only storing the delta information. the operation data of indexDelta needs to be the index of the change combined with the type and delta list. this works for bi and uni directionality. actually, index should be at the end for consistency

to reimagine the examples using these decisions ( assuming bidirectionality):

4
6 ( 4, 5, 6, 7, 8, 9)
3 ( 4, 5, 6)

deltas found for 6,
so:
	replace( [ a, s], [ d, f], 4)
	indexDelta( deltas from q[ 6] to w[ 6], 6)
	delete( [ a, s, d], 7)


7
3 ( 7, 8, 9)
	0, 1, 2
5 ( 7, 8, 9, 10, 11)
	0, 1, 2, 3,  4

deltas found for 7, 8,
so:
	indexDelta( deltas from q[ 7] to w[ 7], 7)
	indexDelta( deltas from q[ 8] to w[ 8], 8)
	replace( [ a], [ d], 9)
	insert( [ a, s], 10)


11
2 ( 11, 12)
2 ( 11, 12)

deltas found for none,
so:
	replace( [ a], [ d], 9)

so we give the SequenceMatcher the two sequences as they are, state ( g). The op codes that it returns have indexes into the two sequences ( g) passed into the deltaConstruction function. We want the delta transforms to be referring to a transform of the data at the state ( A) it will be in just before applying that transform. That means that the sequences will have been modified by the already constructed delta transforms in the sequence matcher delta transformation set. So in order to continuously transform the two sequences into ( A) we shift the index, which is stored in q0Shifted. This index begins at the same location for both q and w in ( a) although in ( A), q and w are not the same

i ( g):
	q: [ 3, 44, | 565, 4, 5, | "tree", 32]
	w: [ 78, 54, 4, 3, 44, | 77, 6, | "tree"]

op      q0 q1 w0 w1 q0Shifted
insert  0  0  0  3  0          forwards ( a): [ 78, 54, 4, 3, 44, 565, 4, 5, "tree", 32]
equal   0  2  3  5  -
replace 2  5  5  7  5          forwards ( a): [ 78, 54, 4, 3, 44, 77, 6, "tree", 32]
equal   5  6  7  8  -
delete  6  7  8  8  8          forwards ( a): [ 78, 54, 4, 3, 44, 77, 6, "tree"]
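For reference, the op codes in the table above fall straight out of the sequence matcher; a quick check, assuming the standard library difflib.SequenceMatcher is the one in use:

from difflib import SequenceMatcher

q= [ 3, 44, 565, 4, 5, "tree", 32]
w= [ 78, 54, 4, 3, 44, 77, 6, "tree"]

for op, q0, q1, w0, w1 in SequenceMatcher( None, q, w).get_opcodes():
	print( op, q0, q1, w0, w1)
# insert 0 0 0 3
# equal 0 2 3 5
# replace 2 5 5 7
# equal 5 6 7 8
# delete 6 7 8 8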

the replace operation could be recursively calculated. the replace section is marked using | at site i. 565 is compared with 77 for deltas, 4 is compared with 6 for deltas, 5 is deleted

in order to construct the delete we need the state a start and the q state g range:
	state a start: q0Shifted+ wParticipatingLen
	state g q start: q0+ wParticipatingLen
	state g q end: q0+ qParticipatingLen

and for insertions, an altered example:
	q: [ 3, 44, | 565, 4, | "tree", 32]
	w: [ 78, 54, 4, 3, 44, | 77, 6, 5, | "tree"]

	state a start: q0Shifted+ qParticipatingLen
	state g q start: w0+ qParticipatingLen
	state g q end: w0+ wParticipatingLen
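The two range calculations above transcribed directly as code; the function and variable names simply mirror the prose, nothing else is assumed:

def deleteRanges( q0: int, q1: int, w0: int, w1: int, q0Shifted: int):
	qParticipatingLen= q1- q0
	wParticipatingLen= w1- w0
	stateAStart= q0Shifted+ wParticipatingLen
	stateGQStart= q0+ wParticipatingLen
	stateGQEnd= q0+ qParticipatingLen
	return stateAStart, stateGQStart, stateGQEnd

def insertRanges( q0: int, q1: int, w0: int, w1: int, q0Shifted: int):
	qParticipatingLen= q1- q0
	wParticipatingLen= w1- w0
	stateAStart= q0Shifted+ qParticipatingLen
	stateGQStart= w0+ qParticipatingLen
	stateGQEnd= w0+ wParticipatingLen
	return stateAStart, stateGQStart, stateGQEnd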

I want the ability to record within or alongside the code, about potential changes which can occur to that code,

An example is that I wish to signal different places where potential systems could be used. I then want to be able to aggregate all of those so that I can judge the impact of that new system.

Would markdown referencing style like: [Referenced place](Referenced%20place.md) in comments be sufficient? Well I dont think I have software that can parse this, If this format is followed then I can use a tool such as grep later

Things that felt important to remember and can be expanded if need be

Non exhaustive

Percentages are expressed with ( 0 representing a complete unfulfilment of a whole) and with ( 1 representing a complete fulfilment of a whole). The concept of a whole is problematic but the number 1 is a complete representation of a whole and is therefore less arbitrary than 100. Much of number based systems revolve around manipulations of a whole, a whole is a very concrete and easy to find concept, it is just all. Maybe stemming from the oneness of the self, much of language thought is based off of wholes. Not that this is the only and optimal way of structuring the mind. This makes 1 less arbitrary than 100, 1 can also be easily used in mathematical operations such as division without transformation

Different methods of combining a set of boolean values to form a singular boolean values are referred to by their full explanation and not by names of logic gates. Referring to such things as logic gates reinforces the idea that those operations are special and somehow fundamental

Multiples of measurement by 1000 found in the commonly referred to metric system are not referred to with the language prefix as this causes the need to invent words unique to each natural language even if the 10 arabic symbols are shared between users of the metric system. Instead x10^n notation is used to directly explain the relationship. "quantity unit^n" notation is used where 0 is the original, e.g. "5 metres^3" not "5 kilometres". unsure whether to still hold to multiples of 3 or not, needs to be determined before use

Breaks are made to improve upon concepts and also to reinforce the idea that reality is not held in common but is subjective and it is not always useful to approach common understandings especially when this rigidifies broken systems

I was thinking about how to use sysml using python models but much of it involves object modelling as can be done in python so python itself can be used. sysml is geared towards different types of people interacting with design and not towards a description which can be parsed by a computer sysml is generic

Modelica seems closer to what I want to be doing, a method of describing systems which can then be picked up

Complex general language description is preferable then if common interfaces can be found

I plotted out an example system using sysml to describe the water collection system that ive built. When modelling, not necessarily in sysml, I am currently aiming to describe a system in general and not just production processes. On reflection, systems encode much complexity. I think we need a discussion about what to include and what not to Aspects of systems that need to be designed

Water collection sysml example

Rainwater capture diagram

Taken from the comments in the document

Need to describe this system enough to be able to determine if the second statement is true:

  • This system supports one persons entire water supply
  • This system will support another person

To know that the system supports one persons water we would need to know what that persons water requirements were in ( quantity per time span) and also know that this system is able to provide a max ( quantity per time span) output >= that persons requirements.

In order to determine the second statements truth we would need to be able to calculate the max ( quantity per time span) of the retrieval stage, this should then be >= the sum of all of the peoples requirement rates. The time span should not exceed the minimum of either persons time span values or a specified value, as to avoid ill health caused by prolonged lack of water

We are dealing with storage here which is more complex than not having storage. If we didnt have storage then provided people consumed at the time of rainfall then the q/ t would equal the rainfall q/ t minus any loss in processing. Ignoring the last storage, if there was storage then the retrieval q/ t would still equal the rainfall q/ t minus the processing loss but q/ t is lost if the tanks are allowed to overflow.

So the question of whether it supports the new increased q/ t demand, we are lacking data about what is acceptable consumption habit

In order to determine the max q/ t of the output we could either state in our model that the q/ t of the retrieval output is the same as the q/ t input of the rainwater. We could also state within the action blocks the relationship between the inputs and outputs.

this is a good case for the need for future expansion of a modelled system

This was possible using the production process model. Storage however wasnt possible to model there. Using these relationships of i/ o in actions we can trace that back to the first step. Currently it would seem like we were dependent on the large contaminants but we are not. Well in the production process example,

We need some way of determining the amount of rainfall. We said that this could be a yearly probability chart

The rain falls at a q/ t rate for a specific area. so its ( q/ t)/ a Lets say that our model here states that our input is in this format.

So we have stated that this system takes rainfall at a certain measurement type as an input. I dont want to encode how the rain will fall within this description and I want to be able to update the description of how rain falls over time, so it would make sense then for this description to reach out and say that this input value is controlled by the output of the system known as "Rainfall". An issue is that the amount of rainfall is dependent upon the location. So, however we have modelled the rainfall system, in order to determine the rainfall we would need to know our location. we could pass the value of our systems location to the rainfall input which would be set at composition time maybe, ( we could state that this system is always in one position ( lame)), we could store the location of our system component ( here that would be the capture action) and then we can use that to determine the location of inputs. Perhaps these locations could be defined as offsets to the systems coordinates.

rainfall is a movement of material and happens all across the planet atmosphere. we dont want to have to simulate or do operations upon the whole planet wide rainfall system and only want to consider the relevant part

in the same way that it would be beneficial to model rainfall as a very large scale system and to avoid the computational issue-s that come with that, perhaps one can model the entirety of a specific reality within the same model and can then take steps to avoid the computational cost-s of storing and interacting with the model. Although this may require a specific modelling format and not a custom implementation. I was thinking it could be divided by location in physical space, but a single location in space could contain a large amount of detail, so a better dividing method would be to divide or delay load based upon factor-s more related to computation such as the detail or any relevant measurable factor. if individual components are involved in modelling then one could use a delayed load structure, perhaps called lazy load and implemented in lTSD.getData. Lazy load could do batch request-s and fan out from the site of the requested amount with a set batch size. would the modelling be generic or any method using a custom interface? if a loss is had from inspectability, perhaps this inspectability could become part of the interface. querying the movement at a certain part of it may be needed. this is categorised as ( ( a model of material organisation), ( contained within the real))
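A rough sketch of the batched lazy load idea; the bulk retrieval call getMany and the batch size are invented here, only the fetch-as-you-iterate shape follows the note above:

from typing import Any, Iterator, Sequence

def lazyBatchedLoad( store, ids: Sequence[ str], batchSize: int= 50)-> Iterator[ Any]:
	# Fetch ids in batches of batchSize, only as the caller iterates
	for batchStart in range( 0, len( ids), batchSize):
		batch= store.getMany( ids[ batchStart: batchStart+ batchSize])  # hypothetical bulk call
		yield from batch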

Physical location of described systems- predefined- entered in composition- other solution- q

in both the description of the existence of this system and in the description of rainfall one is modelling the presence and arrangement of material in reality and therefore both should be part of the reality model

The discerning body can take our rainfall value, either use a directly stated relation or work up the chain and then find the ( q/ t)/ a of the rainfall, and the a of the metal to find the rainfall quantity
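Toy numbers for the relation described above; the values are made up, only the multiplication of a per-area rate by the capture area matters:

rainfallRatePerArea= 0.002   # ( quantity/ time)/ area, e.g. m^3 per hour per m^2 of capture surface
captureArea= 6.0             # area of the capture surface, m^2
captureRate= rainfallRatePerArea* captureArea   # quantity/ time available before processing losses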


Mentioned pre existing computer planning techniques: - Yra fertiliser - Cetis agrochem - Bayer agrochem - Cf industries fertiliser

He mentioned that these were just techniques employed to match products to consumers

Mentioned that rac wasnt good and that one should look at Harper adams He mentioned an arable production broad course

Im not sure how to interpret their answers as they were very offput by computing and seemed strongly guided by feelings of tradition

Transaction process

writing of events that occured during creation of this document

Saving function format

def lTSDSavingDecider(
	lTSD: LongTermStorageData,
	retrievalLocation: int,
	retrievalSequenceLength: int,
	attemptedSaves: list[ int], # A list of the retrieval index-s that attempted to save
	sucsesfulSaves: list[ int], # A list of the retrieval index-s that resulted from a sucsesful save
	priorityLocation: float, # The priority value passed with the save request
	deleted: bool,
):...

Different event operation upon observation

Auxiliary description info

Deletion handling

This occurs for a single ( process, dataId): the instanciatedData for the given dataId in the given process is sent the deletion event and has the deleted state set to true

Pending deletion cleanup

the process can pop through all ( pending deletion)-s for the process and perform for each
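A minimal sketch of that cleanup, assuming the process keeps its pending deletion set on the store interface ( pendingDeletions is an invented attribute name) and that _handleDeletion exists as listed under the added aspects further down:

def _pendingDeletionCleanup( self):
	# Pop through every dataId another process marked as deleted for this process
	while self.pendingDeletions:
		dataId= self.pendingDeletions.pop()
		# Sends the deletion event and sets the deleted state on instanciated data
		self._handleDeletion( dataId)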

Newly created info data

{
	"numberOfRetrieval-s": 1,
	"attemptedSave-s": [],
	"successfulSave-s": [],
}

Upon process loading of store

Increment the process held count of the store by 1. instead, the process count can be encoded by creating a set of ( pending deletion)-s for each process. this dictionary~s locator-s must be unique to the process amongst all interacting process-s. this can therefore not be the pid as multiple machines can interact. it can be { UUID4}_{ PID} -> ca662dd70d1048b7ae40e70976dc5a73_3012 in the lTSI.__init__
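The { UUID4}_{ PID} locator can be built with just the standard library:

import os
import uuid

uniqueProcessId= f"{ uuid.uuid4().hex}_{ os.getpid()}"
# e.g. ca662dd70d1048b7ae40e70976dc5a73_3012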

Upon load

perform Pending deletion cleanup

Check for the info of the piece.
If info present: There will be data so retrieve it. the retrievalIndex is set to the current "numberOfRetrieval-s". the number of retrieval-s is incremented.
Else: If no data in the store then raise the unpresent error. Else create the info entry, set the retrievalIndex to 0, retrieve that data Newly created info data

in the loadings of lTSI.retrieveData

Upon save

saving function retrieval sources:
	"lTSD": the saving lTSD
	"retrievalLocation": stored on the saving lTSD
	"retrievalSequenceLength": from the info specific to the dataId
	"attemptedSaves": from the info specific to the dataId
	"sucsesfulSaves": from the info specific to the dataId
	"priorityLocation": passed along with the save function
	"presentInStore": check if the data is present in the store

perform Pending deletion cleanup any saving instance will have info present so retrieve the info

"attemptedSave-s" on the dataId info has the retrievalIndex appended the save function is run using the retrieved input-s and returns a boolean result if the save function passes: the data is converted to the store format and saved if it is versioned then the current version creation process occurs and is saved the lTSD instance~s retrievalIndex is updated to the newest version and the "numberOfRetrieval-s" on the info is incremented by 1 "sucsesfulSave-s" on the info has the new retrievalIndex appended else: no op the info must be saved the boolean result is returned

in lTSD.save or possibly lTSI.[writeNew...] lTSD.save
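A sketch of that save flow in lTSD.save shaped code; the info fetch, presence check and store write helpers here are invented names, only the ordering follows the description above:

def save( self, priorityLocation: float= 0.5)-> bool:
	info= self.storeInterface._retrieveDataInfo( self.dataId)       # hypothetical info fetch
	info[ "attemptedSave-s"].append( self._retrievalIndex)
	shouldSave= self.storeInterface.savingFunction(
		self,
		self._retrievalIndex,
		info[ "numberOfRetrieval-s"],
		info[ "attemptedSave-s"],
		info[ "successfulSave-s"],
		priorityLocation,
		self.storeInterface.dataPresent( self.dataId),              # hypothetical presence check
	)
	if shouldSave:
		self.storeInterface._writeDataStoreSpecific( self)          # hypothetical converted write, versioned or not
		self._retrievalIndex= info[ "numberOfRetrieval-s"]
		info[ "numberOfRetrieval-s"]+= 1
		info[ "successfulSave-s"].append( self._retrievalIndex)
	self.storeInterface._writeDataInfo( self.dataId, info)          # hypothetical info write
	return shouldSave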

Upon test save

perform Pending deletion cleanup the save function is run using the retrieved input and the boolean result is returned

in lTSD.testSave

Upon reload

perform Pending deletion cleanup. the info will be present in the store so load it in. if it is deleted then one cannot reload the data. if the data is unpresent then one cannot reload the data and an unpresent error can be raised. the current lTSD will not be in memory if it were not either present or deleted in the current store scoped session, so if deleted then it is unpresent, if undeleted then it is present. otherwise retrieve the data and tags from the store and update the instance. update the lTSD~s retrievalIndex to the newest version and increment info[ "numberOfRetrieval-s"]

in lTSD.reload or lTSI._reloadData lTSI._reloadData

Upon deletion

POSTPONED NOT DONE, WAITING ON BULK MANIP, MUST HAPPEN SOON. perform Pending deletion cleanup. The data is removed from the store. the dataId is added to the pending deletion set of every process but the deleting process. perform Deletion handling for the currently deleted dataId

nope-> in lTSI.deleteData as it does not require data to be instanciated instead-> lTSI.prescenceManipulation

dataId must be appended to the deletion set of all other process-s holding that store. All instance-s in the deletion initiating process are notified by signal and have their presentInStore state set to False. Other process-s must react too by sending signal-s and updating state-s. GO THROUGH ALL EVENTS AND MAKE SURE THIS IS BEING CHECKED FOR IN OTHER PROCESS-S. This is different to data that is not present, and must be dealt with differently. this can be signified by the info. this also means that info cannot be stored upon the data itself, as during deletion the piece data is removed but the signifying info needs to be present. this also means that deleted data can receive the normal input-s. so in the initiating process, info[ "deleted"] can be set to True. so instead of other process-s checking for ( the piece they are interacting with)~s deletion state, to get a quicker response to any deletion, the other process-s can check all deletions. perhaps a set of dataId-s can be kept for each process and then when a process comes to perform an operation, it can check for any id-s in this list and emit the ( deletion event)-s and set the ( deleted state)-s for all of these. this could be handled in presence manipulation

Upon transfer to another process

POSTPONED there is info present as it has to be loaded in at least one process. so upon transferring the data the numberOfRetrieval-s must be incremented. it is assumed that the initial instance remains intact and separate, if this were referring to shared memory then they are treated as a single instance. the containing stores must be ensured to be present within the receiving process, as transferal and store creation is not yet designed. i dont know which process this should occur in but it must occur only once. in the sending process the deletion event m

perform Pending deletion cleanup in the source process. if the sink process is new there is no need for Pending deletion cleanup, although looping through an empty set is not expensive, perhaps there would be undesirable communication time. if the sink process is not new then perform Pending deletion cleanup

in custom multiprocessing potentially, maybe in DataForTypes and certain types can have special functionality when instances are being transfered to another process lTSI and lTSD in this case or manual function to copy over data correctly

the retrievalIndex of the new instance lTSD must be set to the current numberOfRetrieval-s and the numberOfRetrieval-s must be incremented

Upon creation

perform Pending deletion cleanup There will be no info present so blank info must be created the retrievalIndex is set to 0

could unify this and loading. this does incur an info retrieval cost for aspect-s which solely create, although mass creation is not seemingly a huge occurrence. this is noted here and can be altered upon slowdown, in that case creation can specify solely creation

in lTSD.__init__ or lTSI._dataInit

Upon process exit

perform Pending deletion cleanup remove the ( pending deletion)-s process entry from the store if the ending process is the final one in the pendingDeletion-s then the aspect should clear all info too to save space

in lTSI.instanceExitCleanup which is called currently by lTSII instance, storeInterface-s

( Happening sequence)-s

Happening sequence „

--Confined to any process-- --no prior operations in process-- --begins in process ^--

  • Q Upon load -> instance ſ of piece 5: creates info entry for piece 5 with the data:

    {
    	"numberOfRetrieval-s": 1,
    	"attemptedSave-s": [],
    	"successfulSave-s": [],
    	"deleted": False,
    }
    

    retrievalLocation on the python lTSD object is set to 0

  • W Upon load -> instance ¶ of piece 5: edits the entry for piece 5: numberOfRetrievals+= 1 retrievalLocation on the python lTSD object is set to numberOfRetrievals before incrementation

  • E Upon test save of instance ſ: the loaded saving function of the python instance of the store is used and is called with a priority between ( 0, 1) retrievalLocation is obtained from the python lTSD object numberOfRetrievals is obtained from: info[ "numberOfRetrieval-s"] attemptedSaveLocation is obtained from: info[ "attemptedSave-s"] successfulSaves is obtained from info[ "successfulSave-s"] priorityLocation is obtained from the calling test function | the save function returns a boolean as to whether the save will commence

  • R Upon save of instance ſ: the information will be collected and passed to the saving function. the saving function will return a boolean value as to whether to save. if successful: the memory instance data is converted to the store format and saved, ( unversioned replaced| version constructed and appended) info[ "successfulSave-s"] is incremented the caller receives True elif not successful: the caller receives False info[ "attemptedSave-s"] is incremented

  • T Upon save of instance ¶: as R

  • Y Upon load -> instance ŧ: as W

  • U Upon save of instance ¶: as R

  • I Upon save of instance ŧ: as R

  • O Upon reload of instance ¶: retrievalLocation on the in memory lTSD is set to the current length. data is retrieved from the store and used to update the data and tags field of the LTSD. info[ "numberOfRetrieval-s"]+= 1

  • P Upon save of instance ŧ: as R

  • A Upon save of instance ¶: as R

  • S Upon save of instance ¶: as R

--path „--

  • D Upon deletion of piece 5: Those connected to the delete signal are notified. A deleted response is given to any who would save after deletion, well some would allow for code to be written agnostic of saving method, they will have to react to deletion in some manner, either by passed response or by language error handling. A deleted state can be queried upon all instanciated lTSD's of a deleted piece
  • F Upon reload of instance ŧ: The requested data is no longer present in the store, either a deleted response is returned or a handlable error is raised, maybe simply a not present in store response and no deletion response. A response is fine, an error must be designed around more specifically to prevent program halt, a response will not halt by default. The ability to give a non halting response which can be very simply converted to a halting response would be good. For now it can return an enum
  • G Upon save of instance ¶: as R

--path ¢--

  • D Upon transfer to another process process m: the instance now within the other process is now a new instance and is given the same treatment as loading of the piece

  • F Upon creation of piece 8 in process m: the retrieval index of the data is set to 0 info is created in the store

  • G Upon deletion of piece 5 from process m: those in process m are sent the delete signal all ltsd instance-s in process m have the deleted state set upon them

  • H Upon load of piece 5 from process ^: the data can be presented as not present

  • J Upon process exit: the last process must know it is the last process accessing the store, and so the store must store all connected process-s, and remove the info for each piece

merge all concepts into a common operation for the header-s. include handling of events that may have occurred in other process-s. include design of what should occur with versioned data, my hunch is that it should be specific to the dataId not the versionId as we are conceptually constructing a single item. concepts may have to be altered however

Saving function ←

def firstRequestSaves(
	lTSD: LongTermStorageData,
	retrievalIndex: int,
	retrievalSequenceLength: int| None,
	attemptedSaves: list[ int]| None,
	sucsesfulSaves: list[ int]| None,
	priorityLocation: float,
	presentInStore: bool,
)-> bool:
	if not presentInStore:
		return False
	if not sucsesfulSaves:
		return True
	if retrievalIndex>= sucsesfulSaves[ -1]:
		return True
	return False

Scoping-s

Scoped to a single store using unversioned data

using Saving function ← blank

  • Q: r: 1, a: [], s: []-> 0
  • W: r: 2, a: [], s: []-> 1
  • E: no transform
  • R: r: 3, a: [ 0], s: [ 2] 0-> 2 the saving function will return True: info saved, info component-s incremented
  • T: r: 3, a: [ 0, 1], s: [ 2] nothing saved 1< 2
  • Y: r: 4, a: [ 0, 1], s: [ 2]-> 3
  • U: r: 4, a: [ 0, 1, 1], s: [ 2] nothing saved 1< 2
  • I: r: 5, a: [ 0, 1, 1, 3], s: [ 2, 4] 3-> 4
  • O: r: 6, a: [ 0, 1, 1, 3], s: [ 2, 4] 1-> 5
  • P: r: 7, a: [ 0, 1, 1, 3, 4], s: [ 2, 4, 6] 4-> 6 // This should be able to save, the philosophy of Saving function ← is that a save is permitted if it is the first in it~s "group" and after all those in the group are invalid // A reload of an old piece to the current latest data, entry into the current valid "group" does not constitute invalidation of others in the current "group"
  • A: r: 7, a: [ 0, 1, 1, 3, 5], s: [ 2, 4, 6] 5-> 6
  • S: r: 8, a: [ 0, 1, 1, 3, 5, 6], s: [ 2, 4, 6, 7] 6-> 7 --„--
  • D: blank deletion of info
  • F: blank
  • G: blank function ← means no saving post deletion and so --¢--
  • D: r: 9, a: [ 0, 1, 1, 3, 5, 6], s: [ 2, 4, 6, 7]-> 8
  • F: r: 1, a: [], s: []-> 0
  • G: blank all info

Scoped to a single store using versioned data

using Saving function ← blank

  • Q: r: 1, a: [], s: []-> 0 create info entry for the dataId the constructing versionId is different to the loaded versionId
  • W: r: 2, a: [], s: []-> 1 the constructing versionId is different to the loaded versionId and the versionId of instance ſ
  • E: no change return True
  • R: r: 3, a: [ 0], s: [ 2] 0-> 2 success! ( instance ſ)~s retrievalIndex is changed to 2, it~s loaded versionId is the previous constructing and the constructing versionId is newly generated
  • T: r: 3, a: [ 0, 1], s: [ 2] failure
  • Y: r: 4, a: [ 0, 1], s: [ 2]-> 3 new instance loaded in with loaded versionId of last saved so the instance that was retrievalIndex 0
  • U: r: 4, a: [ 0, 1, 1], s: [ 2] failure
  • I: r: 5, a: [ 0, 1, 1, 3], s: [ 2, 4] 3-> 4
  • O: r: 6, a: [ 0, 1, 1, 3], s: [ 2, 4] 1-> 5
  • P:
  • A:
  • S: --„--
  • D:
  • F:
  • G: --¢--
  • D:
  • F:
  • G:

Added aspect-s that ( store interface)-s need to implement

  • lTSI._createPendingDeletionsEntry(): unique pid creation can be left to implementation, they can use getInstanceData( "uniqueProcessId")

  • ( Upon load, # Newly created info data) should be dealt with in the implementation~s lTSD retrieval mechanism in the cursor retrieved by retrieveData pending deletion cleanup is handled by the retrieveData implementation wrapper

  • lTSI._pendingDeletionCleanup()

  • lTSI._handleDeletion( dataId: str)

  • lTSD._retrievalIndex

  • lTSI._retrieveSaveFunctionInput( dataId: str)-> annotated coolly

  • lTSI.savingFunction need to ser function-s fine

  • lTSI._updateDataInfoWithSaveResult( originalRetrievalIndex: int, sucses: bool) should append the original retrieval index to info[ "attemptedSave-s"]. If sucses then info[ "numberOfRetrieval-s"] should be incremented, the value before incrementation should be ( appended to info[ "sucsesfulSave-s"], returned), if not sucses then the originalRetrievalIndex should be returned. This does not utilise other function-s as to avoid repeat transaction-s

  • lTSI._getLTSDComponents should raise an unpresent error if the data is unpresent

  • lTSI._incrementRetrievalCount( dataId: str) needs to increment info[ "numberOfRetrieval-s"] and return it~s value before incrementation

  • lTSI._createNewDataInfo( dataId: str) creates the new info entry according to this

  • the cursor in lTSI._retrieveDataStoreSpecific should either create or update info for new lTSD's

  • lTSI._createPendingDeletionsEntry() append-s the store pid to the store known

DB IMPS

mongo can have a collection called sessionInfo this can have the unique pid-s of proc-s with pending deletions along with the info for each data id, perhaps the deletion-s can be a document and then all info-s are their own document

no, there are deletion-s for each proc, so an id per proc in one collection then an id per data in another document

so fsjson can do similar with proc deletion-s, info for data can be done in a similar manner this requires three folder-s of file-s
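A sketch of how those mongo sessionInfo documents ( and, analogously, the fsjson files) could be shaped; the collection split is from the notes above, the field names and example ids are illustrative only:

# One document per process, holding its pending deletion set
pendingDeletionsDocument= {
	"_id": "ca662dd70d1048b7ae40e70976dc5a73_3012",   # the unique process locator
	"pendingDeletion-s": [ "someDataId", "someOtherDataId"],
}

# One document per data id, holding its info
dataInfoDocument= {
	"_id": "someDataId",
	"numberOfRetrieval-s": 1,
	"attemptedSave-s": [],
	"successfulSave-s": [],
	"deleted": False,
}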

All same result-s as unversioned, specific to dataId not versionId

should transaction history be wiped ever? the possible size may need to be accounted for. a 32 bit int is fairly big, but wipe after a session to be sure, or wipe on startup if too big if no shutdown callback can be derived

what should happen upon a save, does a save reset a transaction state? in mark~s example the pulls made before the first save can no longer save. this signifies an end of a transaction session when a transaction session

when an aspect requests to save the save function is run, currently there is only python, this save function can be stored as a symbol reference

retrievalLocation -> obtained from mem object
retrievalSequenceLength -> obtained from store
savingLocation -> obtained from store
priorityLocation -> obtained from save call
deleted -> obtained from store:

I want to index container types using a generic form of indexData and have that index desire be valid in multiple contexts

Given tuple[ containerType, indexData]: 9= I want to index into an instance of containerType in python memory 5= I want to index into an arbitrary ConversionSet serialised format, many of these inherit behaviour from each other so they can copy the indexing definitions from another and edit them

Considering that object instances with variables stored upon them are containers, I also wish to index into these. Given that a very large portion of python objects follow this format, it is not useful to specify the type here, which could be left as None. The rest of in memory indexing of container types seems to be done using square brackets although some may involve a specific function. I may implement a container type which uses container.get( index), i may implement a complex one which uses container.getRecursive( mode: Mode, index0: int, index1: externalLibraryType)[ index2]. So at least within python memory we have to support ( d= ( o.i| getattr( o, i)), f= o[ i], g= A custom function). So if we are passing a none type then we can use d, if passing a non custom type then we can use f. when describing a custom type, we may want to use g or a variation of d, f. for g we need to reference a function, state where the ... maybe the custom definition in python should just be a function which takes the instance and input data and returns the retrieved data. would this play nicely with other forms of retrieval? well we could follow through with the getRecursive scenario

GenericTypeConversion uses solely [] indexing. Converted instances of python objects are accessed by instance[ attributeName] due to the behaviour of getDictFromObjectInstance. Converted instances of python objects using custom conversion mechanisms need to also define how to convert the same indexData into the new form. They could define a function which takes the instance and the indexData, however we need to be able to translate this into a format perceptible by the database, with mongodbs dot notation being a goal here. So maybe it can return a list of items, each prefaced by an enum describing the indexing method, i will write an example for tuples in the GenericTypeConversion

Examples

from enum import IntFlag, auto
from dataclasses import dataclass

class IndexType( IntFlag):
	MEMBER= auto()
	SQUARE_BRACKET= auto()
	CUSTOM_FUNCTION= auto()

@dataclass
class IndexData:
	indexIntoIndexData: int| None| slice= None
  • generic object instance: passed index: ( None, "tree") in memory indexing definition: implied indexing of [ [ MEMBER, IndexData()]] GenericTypeConversion indexing definition: indexing of [ [ SQUARE_BRACKET, IndexData()]] mongodb interpretation: ".tree"

      - If we wanted the indexed object to be referred to as a list instead of an object instance we could pass `( list, "tree")` as the index
    
  • fallback: passed index: ( unknownType65, ( scrambleyammo)) in memory indexing definition: implied indexing of [ [ SQUARE_BRACKET, IndexData()]] GenericTypeConversion indexing definition: passthrough indexing to in memory indexing definition mongodb interpretation: ".{str(( scrambleyammo))}" - use string conversion in Utils

  • tuple: passed index: ( tuple, 9) in memory indexing definition: none needed as it complies with the fallback GenericTypeConversion indexing definition: taken from the custom conversion definition [ [ SQUARE_BRACKET, "tuple"], [ SQUARE_BRACKET, IndexData()]] mongodb interpretation: "tuple.9"

complex scenario conversion function


class ComplexType:
	def getRecursive( self, mode: Mode, n: int, tr: externalLibraryType):...

	@classmethod
	def getRecursiveFromDictRep( cls, dictRep: dict, mode: Mode, n: int, tr: externalLibraryType):...

def getComplexTypeIndex( instance: ComplexType, indexData):
	return instance.getRecursive( indexData[ 2], indexData[ 0], cheese)[ indexData[ 1]]

def getComplexTypeIndexFromDictRep( instance: dict, indexData):
	return ComplexType.getRecursiveFromDictRep( instance, indexData[ 2], indexData[ 0], cheese)[ indexData[ 1]]

  • complex scenario: passed index: ( ComplexType, ( 55, "cheese", MODE.FORWARD, "nice!")) in memory indexing definition: [ [ CUSTOM_FUNCTION, getComplexTypeIndex, IndexData( slice( 0, 3))], [ SQUARE_BRACKET, IndexData( 3)]] GenericTypeConversion indexing definition: despite me currently not being able to think of a use case this could be defined as [ [ CUSTOM_FUNCTION, getComplexTypeIndexFromDictRep, IndexData( slice( 0, 3))], [ SQUARE_BRACKET, IndexData( 3)]] mongodb interpretation: this is unable to be represented within mongodb dot notation as python code cannot be called there

  • dict: passed index: ( dict, "cheese") in memory indexing definition: none needed as it complies with the fallback GenericTypeConversion indexing definition: none defined so it passes through to the in memory definition mongodb interpretation: "dict.cheese"

A function can then be written to interpret

from typing import Any

def returnContainedWithContainedDescription( container, containedDescription: None| slice| Any):
	if containedDescription== None:
		return container
	elif isinstance( containedDescription, slice):
		start, stop, step= containedDescription.indices( len( container))
		return container[ start: stop: step]
	else:
		return container[ containedDescription]

# Maybe we can call a flatten function first which substitutes the indexData in
# Maybe this could be made generic so there isnt any logic tied to the indexing method, we know that the first member of each index operation is the index operations enum
def flattenIndexingDefinition( objectInstance, indexData, indexingDefinition):
	for level in indexingDefinition:

		if level[ 0]== IndexType.CUSTOM_FUNCTION:
			indexDataIndex= 2
		else:
			indexDataIndex= 1

		if isinstance( level[ indexDataIndex], IndexData):
			currentLevelFinalIndexData= returnContainedWithContainedDescription( indexData, level[ indexDataIndex].indexIntoIndexData)
			level[ indexDataIndex]= currentLevelFinalIndexData

	return indexingDefinition
				

def interpretIndexingDefinition( objectInstance, indexData, indexingDefinition):

	indexingDefinition= flattenIndexingDefinition( objectInstance, indexData, indexingDefinition)
	returned= objectInstance

	for level in indexingDefinition:

		if level[ 0]== IndexType.MEMBER:
			returned= getattr( returned, level[ 1])
		elif level[ 0]== IndexType.SQUARE_BRACKET:
			returned= returned[ level[ 1]]
		elif level[ 0]== IndexType.CUSTOM_FUNCTION:
			returned= level[ 1]( returned, level[ 2])

	return returned
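A quick usage check of the two functions above against the "generic object instance" example; the Plant class and its tree member are made up here:

class Plant:
	def __init__( self):
		self.tree= [ 10, 20, 30]

# passed index ( None, "tree") resolves to the implied [ [ MEMBER, IndexData()]] definition
definition= [ [ IndexType.MEMBER, IndexData()]]
print( interpretIndexingDefinition( Plant(), "tree", definition))  # [ 10, 20, 30]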
		

maybe saving custom index data for a conversion can be related to the type being converted to,

need a way to override both fallback and object instance indexingDefinitions

need a way to state that a type is not indexable in a specific domain

need a way to state that a domain does not support custom functions, maybe also to state that it doesnt support other definition types

so I need a way of potentially "splitting" the indexingDefinition So on the JsonCompliantDictionary, we are inheriting the fallback definition for dict, well we can set up a system which changes the definition depending on the passed key. Here we could use that to say that keys that are not instances of str return a NONE value or something like that

what about interpreting the definition, how does this solution work with interpreting the definition
also what of interpreting the instance being relevant

i think maybe we just care about the key
if this is the case then we need a way of describing whether it is key dependant or not, maybe this can be combined with the not supported declaration to obtain 3 different enum values

Generalisability

So the current design is to describe indexing into containers, it is specific to python currently due to the usage of p= ( members, square brackets, custom functions) We can transform between different representations of that data but we are still modelling using the python conventions of p. What is needed to transfer this understanding to other non python understandings of the nature of indexing is to transform between instances of the "thing" that p is I was going to do a manual transformation from the python format into mongodbs dot notation format but i could use this mindset to do the transformation. These could be called indexing method collections as each one can provide its own set of indexing methods So if we are defining different representations of data as in ( python memory instances, python dict rep of that same data, python json compliant dict rep of that data) is this different enough to describing another instance of my brain is bad rn i have runny nose and pounding brain rrrrrrrr let it be known that i still think discrete thought is a bottleneck, all natural language is bad, dont get too sucked in to the game I dont think the other shit matters enough right now, good large scale code alteration tools are needed for big changes in the future, increase the rate of production instead of just producing like a dumb dumb, yeab good luck getting to the moon with that hoe, fucker So ignore this

What of indexing into delta transform sets,

Do this, mongos dot notation and possible tabular or sql indexing form a need for implementing # Generalisability Maybe just discuss indexing into delta transform sets first And also get other design such as that discussed in # Code that needs to be written done first to reduce mental load

Indexing into delta transformation sets as an IndexingMethod makes sense as it is data An issue with using them to construct new versions without loading the versions into memory is that one isnt checking the validity of the index path and so we would have to check for and either prune or disregard invalid delta transforms upon loading

So we are given a sequence of index s. Each of which are in the form tuple[ type, indexData] and it would be useful to transform this into a sequence of delta transform operations. The delta transformation set would be decided by the index s type. And the delta transformation set would have to declare ( that it is a container type, the specific operation that will be used, the mapping from the index data to the operation data) complex behaviour may be required to do this so maybe the conversion set can optionally support it and then provide the relevant information through the return value of a function

the description format could be just the operation there is no need for index data placeholder ( SequenceMatcher, ( "change", ( relevantData)))

to obtain this we must calculate the index. we cant convert from the python method as we need the python type information

Code that needs to be written

Aspects associated with the python indexing method can be marked with ^

  • Need to define the data format used to model within using MutableSequence s as well as defining the IndexData placeholder ( ^)
  • Need to define the different python contained obtenance methods: MEMBER, SQUARE_BRACKET, CUSTOM_FUNCTION ( ^)
  • Need to write associated deconstruction method, interpretIndexingDefinition above ( ^)
  • Need to specify the in python loaded set with defaults for ( types with members), ( fallback). Need to be able to specify custom behaviour for in memory indexing here too
  • Need to be able to specify an arbitrary number of different indexing contexts all of which have their own definitions of both special types, ( MEMBERED_INSTANCE, FALLBACK), and custom types. These could either use indexing systems to inherit definitions or they could use a plethora of methods to do so, the latter seems best as they may wish to inherit from differing places. It seems like IndexingContext s should implement custom functionality to populate their Mapping[ type| Special, IndexingDescription]

These are all inherently associated with a `IndexingMethod`

obtenance methods are defined on the possible `IndexingMethod`
how about special types? Are they on the `IndexingMethod` or the `IndexingContext`?
Maybe the designation that designated ( `MEMBERED_INSTANCE`, `FALLBACK`, other) would find some reason in the future to designate differently depending on the known information at the time
It does seem specific to python, I fall with just placing it within the `IndexingMethod` currently
BECAUSE SPECIALS NEED TO BE PASSED IN THE INDEX INSTEAD OF A TYPE THEY NEED TO BE UNIVERSAL, NO THEY DONT, FALLBACK IS DECIDED BY THE INDEXINGMETHOD NOT PASSED, MEMBER IS PASSED THOUGH,
HOW TO GET AROUND THIS?
WELL IF WE ARE PASSING A 

  • Need a way of specifying that an index description for a type is dependant on the passed key, then need to define a function which takes the key and returns a description
  • Need to be able to take ( a sequence of indexs, a context, a possible method ( although the contexts are associated with methods already)) and return the index description. If impossible to find the indexing description, an impassible obstacle in the path, then the caller needs to be notified. This can also call the custom function should a key dependant description be defined for a type ( ^)
  • Need to be able to interpret the special types as opposed to the custom defined ones. in ^ we need to be given a type or enum and then return the relevant indexing definition using an order specific to ^

How to implement special cases regarding notes above and below

So index sequences passed to resolveIndexSequenceForContext need to be universal regardless of the indexing method that is tied to the passed indexing context

The same combination of types and indexData can then be used within different indexing methods. All of the types and type data passed are inherently python related, so... Not necessarily: a type could be passed which represents a type in another system, and data passed could be representative of another systems data too. this is similar to how an indexing method other than the default python one is representing another systems method of indexing into container data

Therefore, ( as long as this indexing system is confined to python) although we are always passing python data, because python is a general modelling language, we can pass things that represent data which doesnt conform to pythons specification. this can be used to represent type s, indexData s, special type s that conform to perhaps a databases specification
Maybe indexs passed need to be specific to the `IndexingMethod`, yes i think
	Also the python method could search for a __getitem__ method on unknown types instead of needing to specify a special
	well i was against making them specific but i think it makes sense to make them specific

And so where does this leave my desired ( python method, jsoncompliantdictionary context)-> ( mongo method) conversion
well i need to find the jsoncompliantdictionary description of the given index and then it needs to be passed to a conversion function from ( python method) to ( mongo method).
a= This can be done with a function: ( indexSequence, context, destMethod)-> description
s= it can be done with two:
	( indexSequence, context)-> description
	( description, startMethod, destMethod)-> description
d= or it can be done this way:
	( indexSequence, context)-> description
	found, conversions= getDataForType( "indexingDescriptionConversion", startMethod)
	if found:
		if destMethod in conversions:
			conversions[ destMethod]( description)

i think s is best

So given the above indent, special cases and indexing in general should be IndexingMethod specific



# The first element is the data used to determine how one will index into the data and the second element is data which is used to index into the data
# Methods can support any type here, what the method will listen for is referred to in the Methods, `NotableIndex` variable
Index= tuple[ Any, Any]


@runtime_checkable
class IndexingMethodSupportsDefinitionApplication( Protocol):
	def interpretIndexingDefinition( objectInstance: Any, indexData: Any, indexingDefinition: IndexingDescription)-> Any:
		...

class IndexingMethod( Protocol):

	def resolveIndexSequenceForContext( indexSequence: Sequence[ Index], context: ForwardRef( "IndexingContext"))-> tuple[ bool, Sequence[ IndexingDescription]]:
		# Dont currently know if we are taking just one sequence or a tree structure i think just a list of sequences would suffice in a situation where multiple operations must be performed

		# Need to get the description using the IndexingMethod
		# Need to call any key dependant descriptions
		...


class IndexingContext( Protocol):

	indexingMethod: IndexingMethod

	
def convertIndexingDescriptionBetweenMethods( indexingDescription: IndexingMethod.IndexingDescription, sourceMethod, destMethod)-> IndexingMethod.IndexingDescription:
	...

# DataForTypes entry:
indexingMethodConversions: dict[ IndexingMethod, Callable[ [ IndexingDescription], IndexingDescription]]



class PythonInternalIndexingMethod:

	
	IndexData= Any
	# The IndexData used as a placeholder in descriptions can be renamed to IndexDataPlaceholder
	NotableIndex= tuple[ type| None, IndexData]
	# The boolean represents 
	SingleIndexingDescription= tuple[ ObtenanceMethod, Any, ...]
	DefinedIndexingDescription= tuple[ Literal[ False], SingleIndexingDescription]| tuple[ Literal[ True], Callable[ [ Any], SingleIndexingDescription]]
	IndexingDescription= Sequence[ SingleIndexingDescription]

	class ObtenanceMethod( IntFlag):
		MEMBER= auto()
		SQUARE_BRACKET= auto()
		CUSTOM_FUNCTION= auto()
	def interpretIndexingDefinition( objectInstance: Any, indexData: Any, indexingDefinition: IndexingDescription)-> Any:
		...
	def resolveIndexSequenceForContext( indexSequence: Sequence[ Index], context: IndexingContext)-> tuple[ bool, Sequence[ IndexingDescription]]:
		...

class PythonBaseRepresentationIndexingContext:
	indexingMethod= PythonInternalIndexingMethod

	typeToDescriptionMap: Mapping[ Type| Hashable, IndexingDescription]= # formed from maps provided by type in dataForTypes for the base representation

class GenericTypeConversion:
	# IndexingContext protocol implementation
	
	indexingMethod= PythonInternalIndexingMethod

	typeToDescriptionMap= ...  # formed from maps provided by type in dataForTypes for the dict representation, no inheritance

class JsonCompliantDictConversion:
	# IndexingContext protocol implementation
	
	indexingMethod= PythonInternalIndexingMethod

	typeToDescriptionMap= ...  # needs to inherit from GenericTypeConversion and state a key dependant dict conversion
	
class MongoDBDotIndexingMethod:
	NotableIndex= tuple[ None, Convertable[ str]]
	IndexingDescription= str
	
	# Dont really need to interpret the index definition here
	# Both a mongodb collection and filter would need to be passed
	# the definition would be used in the project field and the result would be returned
	# def interpretIndexingDefinition( objectInstance: Any, indexData: Any, indexingDefinition: IndexingDescription)-> Any:

	# Just joins all of the indexData together with dots as strings
	def resolveIndexSequenceForContext( indexSequence: Sequence[ Index], context: IndexingContext)-> tuple[ bool, Sequence[ IndexingDescription]]:
		...
class MongoDBDotIndexingContext:
	indexingMethod= MongoDBDotIndexingMethod


def PythonMethodToMongoDBDotMethod( description: PythonInternalIndexingMethod.IndexingDescription)-> tuple[ bool, MongoDBDotIndexingMethod.IndexingDescription| None]:
	constructedStr= ""
	first= True
	for single in description:
		if single[ 0]== PythonInternalIndexingMethod.ObtenanceMethod.CUSTOM_FUNCTION:
			return ( False, None)

		if not first:
			constructedStr+= "."

		converted= convert( single[ 1], str)

		if converted is None:
			return ( False, None)
		constructedStr+= converted

		first= False

	return ( True, constructedStr)
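
A hedged usage example of the above, assuming convert( value, str) yields the string form of the index data as these notes imply:

# Hypothetical usage: a member access followed by a square bracket index
description= [
	( PythonInternalIndexingMethod.ObtenanceMethod.MEMBER, "body"),
	( PythonInternalIndexingMethod.ObtenanceMethod.SQUARE_BRACKET, 2),
]
ok, dotted= PythonMethodToMongoDBDotMethod( description)
# ok== True and dotted== "body.2" if both index data values are convertable to str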
	


class DeltaTransformIndexingMethod:
	NotableIndex= tuple[ type, Any]
	IndexingDescription= tuple[ DeltaTransformationSet, str, Sequence]
	# Passed data should be a version data tree and not an object to index into
	# Not writing the function now anyway :P fuck you
	# def interpretIndexingDefinition( objectInstance: Any, indexData: Any, indexingDefinition: IndexingDescription)-> Any:

	# An index here is a tuple[ type, Any] where type is a type as in python
	def resolveIndexSequenceForContext( indexSequence: Sequence[ Index], context: IndexingContext)-> tuple[ bool, Sequence[ IndexingDescription]]:
		...

class DeltaTransformIndexingContext:
	indexingMethod= DeltaTransformIndexingMethod

	# Isnt needed as the generating function can be found by resolveIndexSequenceForContext by finding the delta transform set
	# typeToDescriptionMap= 



An issue in the python method may be: how does the caller know whether to pass the type or None? If one knows that member indexing is required in all contexts then they can pass None

An alternative would be to change the fallback type per index, maybe specifying fallback behaviour for python based upon the presence of __getitem__ in the passed type. We arent converting ( between descriptions), only ( ( index sequence)-> ( description))

It was discussed earlier that a custom enum could be used; this would require caller knowledge. If no definition is found for the context and no fallback is specified then __getitem__ can be used, as sketched below
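
A minimal sketch of that fallback; the helper name is hypothetical and the real decision would live inside the python method's resolution:

def fallbackObtenanceForType( typez):
	# no description in the context and no fallback specified: use __getitem__ if present
	if typez is not None and hasattr( typez, "__getitem__"):
		return PythonInternalIndexingMethod.ObtenanceMethod.SQUARE_BRACKET
	return PythonInternalIndexingMethod.ObtenanceMethod.MEMBER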

Coming from previous design of having all deltas be bidirectional

Current format for operations is tuple[ name, data]; this isnt enforced, its just what ive been doing

they are stored in a list as tuple[ type| None, list[ tuple[ transformOperationId, transformData]]]

Version types can currently be INIT| FULL| DELTA| FULL_WITH_DELTA; if we had unidirectional deltas we would have INIT| FULL| FORWARDS_DELTA| BIDIRECTIONAL_DELTA| FULL_WITH_BACKWARDS_DELTA

So calculateDeltaData needs to know the desired direction: if we wish for a FULL_WITH_BACKWARDS_DELTA save then we desire a backwards delta, otherwise we want a normal delta save

so the save function takes a desired: FORWARDS_DELTA| BIDIRECTIONAL_DELTA| FULL_WITH_BACKWARDS_DELTA and calls calculateDeltaData with a desired TransformType of FORWARDS| BIDIRECTIONAL| BACKWARDS

calculateDeltaData does some initial checks and returns, and then finds a ConstructionSet for the type. ConstructionSet.deltaConstruction takes the two instances and returns ( TransformType, data). They could take a preferred TransformType which is None by default; one way to implement this is that when deltaConstruction encounters a situation which could result in branching types of TransformType it uses the preferred. Every operation needs to support bidirectional ( forwards, backwards) but some may store the data in the same manner; in this case deltaConstruction can return bidirectional, as this signals that the most traversible option is available


deconstructVersionDataChain and applyDataTransform need to change

deconstructVersionDataChain will still return a valid sequence but it will possibly be made of uni and bidirectional deltas. When applying data changes in a line now, we will encounter unidirectional deltas

class Directionality( IntFlag):
	BIDIRECTIONAL= auto()
	UNIDIRECTIONAL= auto()

The ConstructionSet s can now have a function called applyDeltas which takes a group of delta transforms of a specific directionality. applyDataTransform currently takes typeAndTransforms: deltaTransforms; it only needs to take the type once, as it should only ever be passed operations from a singular type. So it should take ( object: Any, typez: type, operations: Sequence[ Sequence[ Directionality, Sequence[ Sequence[ transformOperationId, transformData]]]], direction: OperationDirection) and within it should call ConstructionSet.applyDeltas for each Directionality block. So ConstructionSet.applyDeltas should take ( object: Any, directionality: Directionality, operations: Sequence[ Sequence[ transformOperationId, transformData]])
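
A signature level sketch of that split, assuming a hypothetical findConstructionSetForType lookup and assuming applyDeltas returns the updated object:

from typing import Any, Sequence

def applyDataTransform( objectInstance: Any, typez: type, operations: Sequence[ tuple[ "Directionality", Sequence[ tuple[ Any, Any]]]], direction: "OperationDirection")-> Any:
	# findConstructionSetForType is a hypothetical lookup for the type's ConstructionSet
	constructionSet= findConstructionSetForType( typez)
	for directionality, operationBlock in operations:
		# each block is a run of ( transformOperationId, transformData) pairs of one directionality;
		# direction would govern how unidirectional blocks may be applied, which is left open here
		objectInstance= constructionSet.applyDeltas( objectInstance, directionality, operationBlock)
	return objectInstance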


The design of preferred type passing seems a little weak

We can specify either a transform type or a commit type

Delta TransformType s are BIDIRECTIONAL| UNIDIRECTIONAL| NONE| FULL. Commit types are those specified above: INIT| FULL| FORWARDS_DELTA| BIDIRECTIONAL_DELTA| FULL_WITH_BACKWARDS_DELTA, written out below
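
A sketch only, mirroring the IntFlag style used elsewhere in these notes:

from enum import IntFlag, auto

class TransformType( IntFlag):
	BIDIRECTIONAL= auto()
	UNIDIRECTIONAL= auto()
	NONE= auto()
	FULL= auto()

class CommitType( IntFlag):
	INIT= auto()
	FULL= auto()
	FORWARDS_DELTA= auto()
	BIDIRECTIONAL_DELTA= auto()
	FULL_WITH_BACKWARDS_DELTA= auto()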

constructed bidirectional transforms

When specifying the preferred, do we specify q to w, or w to q? If there is no preferred and only bidirectional like before, it is implied q to w forwards and w to q backwards. So maybe bidirectional needs an application direction of q to w or w to q; unidirectional are constructed with a specific direction

I think in some circumstances, if the preferred isnt available then we want to fall back to a full change; in others we could accept bidirectional in place of unidirectional, or unidirectional in place of bidirectional. Maybe these desires could be fully specified using a sequence of acceptable outcomes, and if none can be fulfilled then it falls to a full change; there is fallback behaviour for no sequence specified

The issue with running it once, determining the output and then rerunning is that the conversion set may have to run extra processes, whereas if the order of desired outcomes was known to the conversion function then it could quickly decide on the outcome. Maybe then a preference can be specified as well as a fallback value which either falls back to the directionality other than the preferred and then to full, or directly falls back to a full transform, so:

class TransformFallback( IntFlag):
	OPPOSITE_DIRECTIONALITY_THEN_FULL= auto()
	FULL= auto()

So LTSD.save can take a preferred version type from these specific CommitType s: FORWARDS_DELTA| BIDIRECTIONAL_DELTA| FULL_WITH_BACKWARDS_DELTA. If forwards then we desire a unidirectional delta; if none can be found then we could fall back to bidirectional or full, either one really, so save could take a fallback value. It is currently relatively unknown where it will be called from to initiate the different commit types. I envision an optimisation process which could reduce load times by constructing FULL_WITH_BACKWARDS_DELTA commits on data that has a large amount of subsequent deltas. Bidirectional or forward could be discerned between by a software aspect seeking to optimise each individual version; maybe one that doesnt care about backwards traversing on the data could only use FORWARDS_DELTA

calculateDeltaData can take a preferred delta transform type from these TransformType s: UNIDIRECTIONAL| BIDIRECTIONAL, as well as a fallback of any from TransformFallback

DeltaTransformSet.deltaConstruction can take a preferred delta transform type from these TransformType s: UNIDIRECTIONAL| BIDIRECTIONAL, as well as a fallback of any from TransformFallback

UNIDIRECTIONAL s are calculated as q to w; BIDIRECTIONAL s can be applied as ( q to w)| ( w to q)

So now need to redesign save, calculateDeltaData and DeltaTransformSet.deltaConstruction. save needs to call calculateDeltaData with UNIDIRECTIONAL if FORWARDS_DELTA, with BIDIRECTIONAL if BIDIRECTIONAL_DELTA and with UNIDIRECTIONAL if FULL_WITH_BACKWARDS_DELTA, as sketched below
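
A sketch of that mapping from the preferred CommitType passed to save to the TransformType passed to calculateDeltaData, assuming the enums sketched above:

# preferred CommitType given to save -> preferred TransformType given to calculateDeltaData
preferredTransformForCommit= {
	CommitType.FORWARDS_DELTA: TransformType.UNIDIRECTIONAL,
	CommitType.BIDIRECTIONAL_DELTA: TransformType.BIDIRECTIONAL,
	CommitType.FULL_WITH_BACKWARDS_DELTA: TransformType.UNIDIRECTIONAL,
}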

How the output is managed is dependent on the transform type of the output as well as the preferred CommitType passed to save. FULL, NONE and BIDIRECTIONAL give the same result regardless. If the resultant TransformType is UNIDIRECTIONAL then: if the preferred CommitType is FORWARDS then we are just using the returned data as a forwards delta; if the preferred CommitType is FULL_WITH_BACKWARDS then we need to use the full data and the returned data

calculateDeltaData is mostly a passthrough to either basic returns or DeltaTransformSet.deltaConstruction

DeltaTransformSet.deltaConstruction can return: tuple[ TransformType, tuple[ type| None, list[ tuple[ transformOperationId, transformData]]]| None| Any]

This doesnt need to change: delta transforms of both directionalities can return list[ tuple[ transformOperationId, transformData]]. All elements of the sequence should be of the same directionality; this includes any recursively contained delta lists. Due to this recursive requirement, if the passed fallback to the containing level is FULL then that also must be the case for the called contained. If the contained cannot match the preference of the container then: if FULL is active, the contained must not default to the opposite of the preferred delta type; if OPPOSITE_DIRECTIONALITY_THEN_FULL is active, the contained should fall back to the opposite of the preferred and then full

i think this means that the contained fallback should match the fallback of the container

The preferred of the contained should match the preferred of the container; if this is enforced by calculateDeltaData then we should be fine with just specifying the input preferred and fallback for the contained

in the future it could also be made so that individual transforms in the delta list are either uni or bi directional

Defined indexing methods with contexts: one can transform between contexts while using the same description, and can transform between methods, which do not use the same description format

defined conversions between types

to avoid doing extra work reimplementing same thing with different words

Can translate between different contexts. Currently in the indexing definition there is an outer with a format and inners inside; could have an outer above that, where each mark would have its own format, probably more complex

This is very simplistic but it is generalising, doing lots of work with one. A complex world needs complex solution-s to interact with it, but these can decrease by cleverly picking solution-s, so a general solution is a thing to approach: simplifying the world for the self, complexifying it for those who would compete

Makes sense to do this to avoid extra work; will do when another need is detected. Will be hard; not good enough to symbolically promise to remember

Need to quickly display remembrances

delta transform set too between q, w

( quote: conversions between types) uses python functions. Index changes could be described using the reference frame from indexing; this would be the ( level/ mark) named indexing method. No, it wouldnt be any ( level/ mark), it could be any, as it isnt related to indexing and indexing does not have to be bound by it. The delta transform set uses operations with data to represent on long term storage; these are translated to functions. Could generalise and save the function symbol and required args

preventing infinite work

Created with hopes that viewing these all together will allow for organisation that supports all of them

Check backlinks too

  • Where to store certain pieces of data that are used by lots of systemic components and how to access them: UserConfig, KnownStoreInstances
  • When to ask

In a store, each data is separated by id and then each data can have versions

What about what is returned by the store query? Just separated by id or by version too?

What about the data chart viewer? If representing version Ids uniquely, do we group by dataId in a tree like manner or do we use just a list? Upon multi selection in the data browser and editor, do we exclude versions? This depends on how bulk actions should work

How do bulk actions work here? I think they should be implemented by the store. What of present memory representations?

When loaded to an ltsd it is separated by version and there can be one tracked ltsd in memory for each version

I envisioned having a version picker at the top of the data editor and so each data editor pane would be constructed for a data id but could be told to open upon a specific version. If it is editing ltsd then it should begin on a displayed specific version and any saves will save it as a new version. You can then pick a version from the list to copy from

So we have our store representation which holds info for all versions, then the in memory representation which is loaded from a specific version but after any changes is considered a potential new version

To delete a version of ltsd in a store: well, that version could be any from INIT| FORWARDS_DELTA| UNIDIRECTIONAL_DELTA| FULL| FULL_WITH_BACKWARDS_DELTA

If the version is at the start then if the commit after it is a FORWARDS_DELTA| UNIDIRECTIONAL_DELTA then that must be saved as a FULL

if both commits surrounding the deleted commit are FULL then there is nothing to be done

any delta type stored on the commit following the deleted ( deletedIndex+ 1) is now invalid and must be updated with any from: FORWARDS_DELTA| UNIDIRECTIONAL_DELTA| FULL| FULL_WITH_BACKWARDS_DELTA

Its state before deletion must be deconstructed
if saving as FULL then the deconstructed state can be saved directly
if saving as a FORWARDS_DELTA| UNIDIRECTIONAL_DELTA| FULL_WITH_BACKWARDS_DELTA then a delta must be constructed between ( deletedIndex- 1), ( deletedIndex+ 1)
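
A sketch of the decision above, assuming a hypothetical commits list of ( CommitType, data) entries; it only answers whether the following commit must be rewritten, not which type to rewrite it as:

def followingCommitNeedsResave( commits, deletedIndex):
	following= deletedIndex+ 1
	if following>= len( commits):
		return False  # nothing follows the deleted commit
	if commits[ following][ 0]== CommitType.FULL:
		return False  # a full commit does not reference its predecessor
	# its delta was built against the deleted commit, so it must be rebuilt as any of
	# FORWARDS_DELTA| UNIDIRECTIONAL_DELTA| FULL| FULL_WITH_BACKWARDS_DELTA as described above
	return True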

This is useful in the creation of an undo stack; an example is the undo stack in vscode. It is also useful when different things ⩩ in a system are designed to interact with other things ↭ that may change over time and we want to maintain a consistent symbol to access that meaning: ⩩ can still interact with the version of ↭ it has been prescribed to act with.

We could store the data over and over again throughout each version. A potential problem is the size of each storage if we are saving everything. An alternative would be to save the initial whole data and then save changes upon it from there. An issue with delta changes could be the time it takes to calculate the desired version; that can be mitigated by updating the baseline to a specific version

Delta changes

What can this be? It could be the addition of a data field, the removal of a data field or the change of a data field on the top level. Could this be done for nested data e.g.: Here we are storing the Module at the top level

Module(
	   body= [
		   Assign(
			   ...
		   ),
		   Assign(
			   ...
		   ),
		   Class(
			   body= [
				   Assign(
					   ...
				   )
			   ]
		   )
	   ]
)

We then add another assignment to the body of the class definition

Module(
	   body= [
		   Assign(
			   ...
		   ),
		   Assign(
			   ...
		   ),
		   Class(
			   body= [
				   Assign(
					   ...
				   ),
				   Assign(
					   ...
				   )
			   ]
		   )
	   ]
)

Instead of recording it as a change upon the body attribute of the module data, it could be recorded as a change on the module.body[ 2].body data. The ability to do this would add a lot of weight to storing delta changes. Such comparison of the new and the old would have to occur anyway when constructing code that updates instantiated data from one version to another in Bulk data editing actions; a sketch of such a nested record follows
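
A sketch of what such a nested delta record could look like, reusing the python indexing method's obtenance pairs from earlier; the path, operation name and payload here are illustrative only:

nestedDelta= (
	# path into the data: .body[ 2].body
	[ ( PythonInternalIndexingMethod.ObtenanceMethod.MEMBER, "body"),
	  ( PythonInternalIndexingMethod.ObtenanceMethod.SQUARE_BRACKET, 2),
	  ( PythonInternalIndexingMethod.ObtenanceMethod.MEMBER, "body")],
	# operation applied at that path, in the tuple[ name, data] format used for operations above
	( "append", "<the new Assign node>"),
)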

We could also store additional information on how to implement delta changes. A reason for this is the desire to store large pieces of text and to avoid the large file sizes from repeated whole stores, which would be the case if only the above were implemented. How pieces of data may internally be updated is a matter of their type. In fact we could use the attribute equality based delta changes mentioned above as a fallback for when a custom method of doing so is not provided, as mentioned for strings.

Commit data

  • ID
  • Timestamp - Format seconds?
  • User name?
  • Computer name?
  • System information?

I dont think it seems right to enforce one to state the user and computer but it seems useful and a sensible default. System information may be more relevant for bug reporting

The id value could be an incrementing number for each version but in some use cases it is good to be able to discern between version iteration 5 from one store and version iteration 5 from another without having to state a consistent store. The only use for this I know of so far is when referencing which specification version you are following

versionInformation= Optional[ bool| tuple[ str]]

So to update the record, the design of the commit data is sitting at:

  • Commit id: str
  • User: Optional[ str]
  • Computer name: Optional[ str]
  • Timestamp: float ( stored in seconds; the first version is the time since 1970, 1, 1 and subsequent versions are since the )
  • Commit type: Literal[ INIT, DELTA, FULL]
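
A sketch of that record as a dataclass; the field names are assumptions and the timestamp measurement is left as described above:

from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class CommitData:
	commitId: str
	timestamp: float  # seconds, measured as described in the note above
	commitType: Literal[ "INIT", "DELTA", "FULL"]
	user: Optional[ str]= None
	computerName: Optional[ str]= None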


So versioning can be implemented differently by the store as different store structures could make good use of the freedom

So the places in the code base in which the versioning system concretely exists are within the store interface protocol, and maybe in other places too such as the LongTermStorageData; I dont think anywhere else

So do I want to be able to load multiple versions into memory at once? Well, different things could reference different versions that are running at once, so yes I think so.

So we create our ltsd and we save; this creates it in our store. We make some changes to the data and save again. What happens now? Well, I want versioning to be off by default so it just overwrites the existing save

Now I have something that I want to be versioned. Well, I create it and then I save; this creates the data in our store in the form of the first version. I now edit the data and save again, creating a second version. I now load the first version into memory again, edit the first memory rep and save

I think saving over versions within themselves defeats the point of versioning; it would bring back the interaction point problem we intended on fixing. So when we save again we create a new version

Now

Delta saving mechanism

This should be designed in tandem with Bulk data editing actions# ^9e834f

Delta change discernment and application design


So I think this works well; however, do we allow switching between versioned and not versioned?

So it works fine if we specify it solely at or before the ltsd's first save. Perfectly possible, but maybe too much responsibility, so I think we should specify whether it is versioned at ltsd instantiation with a default of false
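
A minimal sketch of that default, assuming a simplified LongTermStorageData constructor:

class LongTermStorageData:
	def __init__( self, data, versioned: bool= False):
		self.data= data
		self.versioned= versioned  # chosen at instantiation, off unless asked for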

Application of a version history to find a version

So it isnt as simple as I thought it was. I thought that to find any version I could find the closest full version and then apply deltas from that to the desired version. This is inadequate as:

  • Delta changes on a specific version are in reference to a conversion to and from ( the version before themselves) and ( themselves). Following this convention, in order to convert backwards from a full, delta information would have to be stored alongside it.
  • Full data is currently only dumped when a delta change isnt possible.

There is reason however to store full data alongside a delta, and thats in order to reduce loading time if data has a large number of complex delta changes. The question is how does one go about creating the full with delta revisions; I think it should be at least a little automatic. An idea is to time the retrieval of data and if it is over some threshold, maybe a threshold respective of its binary size, then we should create a full entry with a delta. If we then wanted to save space we could delete some fulls, provided that we ensure the path is still fully walkable; maybe this would just leave the init

We know we can always convert forwards and we can only convert backwards if a full data has been saved alongside a delta

So the modifications I must do to this function are that we can only detect an afterIndex if it has a type of CommitType.FULL_AND_DELTA


So, a choice: do we take preprocessed, only necessary version data, or do we do it ourselves? Doing it ourselves may require an outside source to deserialise more than is necessary, and taking only what is necessary may shift a lot of work onto

We need to take a list of versionData. What is necessary? So we have our chain: it starts with an initial commit of the whole data and then a sequence of delta changes and full commits depending on what was possible. So upon receiving the whole list, a programmatic aspect should discern which entry we want. If the entry we want is a full commit then we can just return that. If the entry we want is a delta change then we can look at the closest full commit by it and also possibly look at the total number of delta changes to reach it from either side

""" [ 23, 6543, 2332, 6776, 3, 790] [ 0, 1, 2, 3, 4, 5 ] 6

2 is desired 1 is nearest full

the deltas of 2 describe the movement from 1 to 2 sum up transforms from [ 2: 3] == [ nearest+ 1: desired+ 1] and apply them forwards on 1

2 is desired 5 is nearest full the deltas applied backwards of 5, 4, 3 describe the movement from 5 to 2 sum up transforms from [ 3: 6] == [ desired+ 1: nearest+ 1] not quite: 5 must be full with delta 3, 4, 5 describe enough to translate back to 2, 5s transforms are obtained in a different manner it is [ 3: 5]+ transform of 5 so [ desired+ 1: nearest]+ transform of nearest """
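
A sketch of the index arithmetic from the example above, assuming commits is the full version chain and the caller applies the collected transforms in the returned direction:

def collectTransformsTowardsDesired( commits, desiredIndex, nearestFullIndex):
	if nearestFullIndex< desiredIndex:
		# walking forwards from the nearest full: deltas [ nearest+ 1: desired+ 1], applied forwards
		return commits[ nearestFullIndex+ 1: desiredIndex+ 1], "forwards"
	# walking backwards: the nearest entry must be a full with delta, so its own transform is
	# included alongside [ desired+ 1: nearest], all applied backwards
	return commits[ desiredIndex+ 1: nearestFullIndex]+ [ commits[ nearestFullIndex]], "backwards"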

What of a quick memory presence version?

I have the case where the design is currently structured in a way that we construct a version upon construction; it gets an initial version number, which is used to save the initial version and is kept with the quick memory representation. Upon a save we construct a new id, save with that id and keep that with the quick memory representation. When pulling from the store we can specify a versionId; if that is in memory we give you that, and if it isnt then we load it in and give it to you

A problem with this is if we pull the latest version, make some changes, then require the latest version again. If we run the getLTSD function then we will receive the in memory version. If we run the _getLTSDComponents function then we can avoid the problem; however we may encounter a scenario where we need to load in LTSD of the latest saved version whilst there is already LTSD created from the latest version in memory.

So my proposal is to create a new version id whenever we pull ltsd from the store into memory which is stored on the ltsd and is also used as the index in the stores memory. It can still store the id that it was loaded from in case we want to use the reload function. Upon saving it is still diffed against the latest present version in the store even if we have pulled from a version prior to the latest.
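
A sketch of this proposal, with hypothetical store and LTSD attribute names; the only point is that every pull into memory gets a fresh version id while remembering the id it was loaded from:

def pullLTSDIntoMemory( store, dataId, requestedVersionId):
	components= store._getLTSDComponents( dataId, requestedVersionId)
	ltsd= LTSD( components)  # constructing from components is assumed here
	ltsd.loadedFromVersionId= requestedVersionId  # kept in case the reload function is used
	ltsd.versionId= store.generateNewVersionId()  # hypothetical id generator
	store.quickMemMap[ ltsd.versionId]= ltsd  # indexed in the store's memory by the new id
	return ltsd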

So to run over desired ltsd interactions:

initialCreation-> create versionId, loaded set to a separate new id. In dataInit: saved to quickMemMap using versionId, initial version saved using the loaded id

This is another big rework and why?

So it may be clapped but the first way is how we're rolling for now

Supporting existing systemic aspects

Filtering

Data editor gui

| Mode | Description | Specified input | Output alteration |
| --- | --- | --- | --- |
| Addition | | Data units | This could alter the list as filters can add data, so a set of addition-s and removal-s |
| Removal | | | |
| Deletion | When data is deleted it is known that it is not needed in the | | |

Well, there is not much point if the operation is going to reinclude all data during incremental filtration; if added data can be influenced by removal-s then an accurate result can only be obtained by full filtration, so might as well just stick to full filtration