Transaction process
writing of events that occurred during the creation of this document
- wrote the ( event operation)-s
- wrote pathway-s of events that could occur using the described ( event operation)-s
- wrote how the info for a specific dataId evolves in the store when using a specific saving function with unversioned + versioned data
- wrote a supposed implementation for each event operation using the encountered scenarios
Saving function format
def lTSDSavingDecider(
    lTSD: LongTermStorageData,
    retrievalLocation: int,
    retrievalSequenceLength: int,
    attemptedSaves: list[ int], # A list of the retrieval index-s that attempted to save
    successfulSaves: list[ int], # A list of the retrieval index-s that resulted from a successful save
    priorityLocation: float, # The priority value passed with the save request
    deleted: bool,
)-> bool: ...
Different event operations upon observation
Auxiliary description info
Deletion handling
This occurs for a single ( process, dataId) pair: the instantiated data for the given dataId in the given process is sent the deletion event and has its deleted state set to True
Pending deletion cleanup
the process can pop through all ( pending deletion)-s for the process and perform Deletion handling for each
Newly created info data
{
    "numberOfRetrieval-s": 1,
    "attemptedSave-s": [],
    "successfulSave-s": [],
    "deleted": False,
}
Upon process loading of store
Increment the process held count of the store by 1
instead, the process count can be encoded by creating a set of ( pending deletion)-s for each process
this dictionary~s locator-s must be unique to the process amongst all interacting process-s; this can therefore not be the PID, as multiple machines can interact ( see the sketch below)
it can be { UUID4}_{ PID}
-> ca662dd70d1048b7ae40e70976dc5a73_3012
in the lTSI.__init__
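A minimal sketch of one way to build such a locator; makeUniqueProcessId is a hypothetical helper name, only the { UUID4}_{ PID} format comes from the notes above:

import os
import uuid

def makeUniqueProcessId()-> str:
    # Hex UUID4 plus the local PID, matching the format above,
    # e.g. "ca662dd70d1048b7ae40e70976dc5a73_3012"
    return f"{uuid.uuid4().hex}_{os.getpid()}"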
Upon load
perform Pending deletion cleanup
Check for the info of the piece
If info present: there will be data, so retrieve it; the retrievalIndex is set to the current "numberOfRetrieval-s" and the number of retrieval-s is incremented
Else:
    If no data in the store then raise the unpresent error
    Else create the info entry ( Newly created info data), set the retrievalIndex to 0, and retrieve that data
( see the sketch below)
in the loadings of lTSI.retrieveData
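A minimal sketch of this load path, assuming hypothetical helpers _getDataInfo, _saveDataInfo, _presentInStore, _constructLTSD and an UnpresentError type; _pendingDeletionCleanup and _getLTSDComponents are aspect-s listed at the end of this document:

def retrieveData(self, dataId: str):
    self._pendingDeletionCleanup()
    info = self._getDataInfo(dataId)                   # hypothetical accessor, returns None if no info entry exists
    if info is not None:
        retrievalIndex = info["numberOfRetrieval-s"]   # index is the count before incrementation
        info["numberOfRetrieval-s"] += 1
        self._saveDataInfo(dataId, info)               # hypothetical write-back
    else:
        if not self._presentInStore(dataId):           # hypothetical presence check
            raise UnpresentError(dataId)
        info = {"numberOfRetrieval-s": 1, "attemptedSave-s": [],
                "successfulSave-s": [], "deleted": False}   # Newly created info data
        self._saveDataInfo(dataId, info)
        retrievalIndex = 0
    components = self._getLTSDComponents(dataId)       # raises the unpresent error itself if the data is missing
    lTSD = self._constructLTSD(dataId, components)     # hypothetical: build the in-memory lTSD instance
    lTSD._retrievalIndex = retrievalIndex
    return lTSD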
Upon save
saving function retrieval sources:
    "lTSD": the saving lTSD
    "retrievalLocation": stored on the saving lTSD
    "retrievalSequenceLength": from the info specific to the dataId
    "attemptedSaves": from the info specific to the dataId
    "successfulSaves": from the info specific to the dataId
    "priorityLocation": passed along with the save call
    "presentInStore": check if the data is present in the store
perform Pending deletion cleanup; any saving instance will have info present, so retrieve the info
"attemptedSave-s" on the dataId info has the retrievalIndex appended
the save function is run using the retrieved input-s and returns a boolean result
if the save function passes:
    the data is converted to the store format and saved; if it is versioned then the current version creation process occurs and is saved
    the lTSD instance~s retrievalIndex is updated to the newest version and the "numberOfRetrieval-s" on the info is incremented by 1
    "successfulSave-s" on the info has the new retrievalIndex appended
else: no op
the info must be saved and the boolean result is returned ( see the sketch below)
in lTSD.save
or possibly lTSI.[writeNew...]
lTSD.save
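A minimal sketch of the save path under the same assumptions; _storeInterface and _writeToStore are hypothetical names, _retrieveSaveFunctionInput is assumed to return only the info-derived keyword arguments, and the dataId argument to _updateDataInfoWithSaveResult is an assumption beyond the signature listed later:

def save(self, priorityLocation: float= 0.0)-> bool:
    interface = self._storeInterface                             # hypothetical back-reference to the lTSI
    interface._pendingDeletionCleanup()
    inputs = interface._retrieveSaveFunctionInput(self.dataId)   # assumed to exclude lTSD and priorityLocation
    shouldSave = interface.savingFunction(
        lTSD=self,
        priorityLocation=priorityLocation,
        **inputs,                                                # index, sequence length, save lists, presence
    )
    if shouldSave:
        interface._writeToStore(self)                            # hypothetical: convert to store format, version if needed
    # append the original retrievalIndex to "attemptedSave-s"; on success also append the
    # pre-increment "numberOfRetrieval-s" to "successfulSave-s" and adopt it as the new index
    self._retrievalIndex = interface._updateDataInfoWithSaveResult(
        self.dataId, self._retrievalIndex, shouldSave)
    return shouldSave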
Upon test save
perform Pending deletion cleanup; the save function is run using the retrieved input-s and the boolean result is returned ( see the sketch below)
in lTSD.testSave
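A minimal sketch of the test save, using the same hypothetical helper names as the save sketch above:

def testSave(self, priorityLocation: float= 0.0)-> bool:
    interface = self._storeInterface
    interface._pendingDeletionCleanup()
    inputs = interface._retrieveSaveFunctionInput(self.dataId)
    # same decision as save, but nothing is written and the info is left untouched
    return interface.savingFunction(lTSD=self, priorityLocation=priorityLocation, **inputs)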
Upon reload
perform Pending deletion cleanup
the info will be present in the store so load it in
if it is deleted then one cannot reload the data
if the data is unpresent then one cannot reload the data and an unpresent error can be raised; the current lTSD would not be in memory if it were not either present or deleted within the current store scoped session, so if deleted then it is unpresent, and if undeleted then it is present
otherwise retrieve the data and tags from the store and update the instance
update the lTSD~s retrievalIndex to the newest version and increment info[ "numberOfRetrieval-s"] ( see the sketch below)
in lTSD.reload
or lTSI._reloadData
lTSI._reloadData
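A minimal sketch of the reload path, reusing the hypothetical _getDataInfo accessor from the load sketch and assuming _getLTSDComponents returns an object carrying data and tags:

def _reloadData(self, lTSD)-> None:
    self._pendingDeletionCleanup()
    info = self._getDataInfo(lTSD.dataId)              # info is present for any already loaded lTSD
    if info["deleted"]:
        raise UnpresentError(lTSD.dataId)              # deleted data cannot be reloaded
    # _getLTSDComponents raises the unpresent error itself if the data is missing from the store
    components = self._getLTSDComponents(lTSD.dataId)
    lTSD.data, lTSD.tags = components.data, components.tags   # update the in-memory instance
    # adopt the newest retrieval index and count this reload as a retrieval
    lTSD._retrievalIndex = self._incrementRetrievalCount(lTSD.dataId)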
Upon deletion
POSTPONED NOT DONE, WAITING ON BULK MANIP, MUST HAPPEN SOON
perform Pending deletion cleanup
the data is removed from the store
the dataId is added to the pending deletion set of every process but the deleting process
perform Deletion handling for the currently deleted dataId
nope-> in lTSI.deleteData
as it does not require data to be instantiated
instead-> lTSI.prescenceManipulation
dataId must be appended to the deletion set of all other process-s holding that store
All instance-s in the deletion initiating process are notified by signal and have their presentInStore state set to False
Other process-s must react too, by sending signal-s and updating state-s
GO THROUGH ALL EVENTS AND MAKE SURE THIS IS BEING CHECKED FOR IN OTHER PROCESS-S
This is different to data that is simply not present, and must be dealt with differently; this can be signified by the info
This also means that info cannot be stored upon the data itself, as during deletion the piece data is removed but the signifying info needs to remain present
This also means that deleted data can receive the normal input-s, so in the initiating process info[ "deleted"] can be set to True
Instead of other process-s checking for ( the piece they are interacting with)~s deletion state, to get a quicker response to any deletion the other process-s can check all deletions: a set of dataId-s can be kept for each process, and when a process comes to perform an operation it can check for any id-s in this set, emit the ( deletion event)-s, and set the ( deleted state)-s for all of these ( see the sketch below)
this could be handled in lTSI.prescenceManipulation
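A minimal sketch of the deletion side and of Pending deletion cleanup, assuming a store-level mapping of unique process id to its pending dataId set; _allPendingDeletionSets, _removeFromStore, _getDataInfo, _saveDataInfo and uniqueProcessId are hypothetical names, and persisting the set mutations back to the store is elided:

def prescenceManipulation(self, dataId: str)-> None:
    # deletion branch only, sketched
    self._pendingDeletionCleanup()
    self._removeFromStore(dataId)                     # hypothetical: remove the piece data itself
    info = self._getDataInfo(dataId)
    info["deleted"] = True                            # the signifying info outlives the removed data
    self._saveDataInfo(dataId, info)
    for processId, pending in self._allPendingDeletionSets().items():
        if processId != self.uniqueProcessId:
            pending.add(dataId)                       # other process-s notice on their next operation
    self._handleDeletion(dataId)                      # Deletion handling in the initiating process

def _pendingDeletionCleanup(self)-> None:
    # pop every dataId pending for this process and perform Deletion handling for each
    pending = self._allPendingDeletionSets()[self.uniqueProcessId]
    while pending:
        self._handleDeletion(pending.pop())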
Upon transfer to another process
POSTPONED
there is info present, as the data has to be loaded in at least one process, so upon transferring the data the numberOfRetrieval-s must be incremented
it is assumed that the initial instance remains intact and separate; if this were referring to shared memory then they are treated as a single instance
the containing stores must be ensured to be present within the receiving process, as transferal and store creation is not yet designed
i dont know which process this should occur in, but it must occur only once
in the sending process the deletion event m
perform Pending deletion cleanup in the source process; if the sink process is new there is no need for Pending deletion cleanup, and although looping through an empty set is not expensive, perhaps there would be undesirable communication time; if the sink process is not new then perform Pending deletion cleanup
in custom multiprocessing potentially; maybe in DataForTypes certain types can have special functionality when instances are being transferred to another process ( lTSI and lTSD in this case), or a manual function to copy over data correctly
the retrievalIndex of the new instance lTSD must be set to the current numberOfRetrieval-s and the numberOfRetrieval-s must be incremented
Upon creation
perform Pending deletion cleanup
there will be no info present so blank info must be created ( Newly created info data); the retrievalIndex is set to 0
could unify this and loading; this does incur an info retrieval cost for aspect-s which solely create, although mass creation is not seemingly a huge occurrence; this is noted here and can be altered upon slowdown, in which case creation can specify solely creation
in lTSD.__init__
or lTSI._dataInit
Upon process exit
perform Pending deletion cleanup
remove the ( pending deletion)-s process entry from the store
if the ending process is the final one in the pendingDeletion-s then the aspect should clear all info too, to save space ( see the sketch below)
in lTSI.instanceExitCleanup
which is called currently by lTSII instance, storeInterface-s
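A minimal sketch of the exit path, reusing the hypothetical pending-deletion mapping from the deletion sketch above; _clearAllDataInfo is likewise a hypothetical name:

def instanceExitCleanup(self)-> None:
    self._pendingDeletionCleanup()
    pendingSets = self._allPendingDeletionSets()
    pendingSets.pop(self.uniqueProcessId, None)       # remove this process's ( pending deletion)-s entry
    if not pendingSets:
        self._clearAllDataInfo()                      # last process out clears all info to save space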
( Happening sequence)-s
Happening sequence „
--Confined to any process-- --no prior operations in process-- --begins in process ^--
-
Q Upon load -> instance ſ of piece 5: creates info entry for piece 5 with the data:
{ "numberOfRetrieval-s": 1, "attemptedSave-s": [], "successfulSave-s": [], "deleted": False, }
retrievalLocation on the python lTSD object is set to 0
-
W Upon load -> instance ¶ of piece 5: edits the entry for piece 5: numberOfRetrievals+= 1 retrievalLocation on the python lTSD object is set to numberOfRetrievals before incrementation
-
E Upon test save of instance ſ: the loaded saving function of the python instance of the store is used and is called with a priority between ( 0, 1)
retrievalLocation is obtained from the python lTSD object
numberOfRetrievals is obtained from info[ "numberOfRetrieval-s"]
attemptedSaves is obtained from info[ "attemptedSave-s"]
successfulSaves is obtained from info[ "successfulSave-s"]
priorityLocation is obtained from the calling test function | the save function returns a boolean as to whether the save will commence
-
R Upon save of instance ſ: the information will be collected and passed to the saving function, which returns a boolean value as to whether to save
if successful: the memory instance data is converted to the store format and saved ( unversioned replaced | version constructed and appended), info[ "successfulSave-s"] is appended to, and the caller receives True
elif not successful: the caller receives False
in both cases info[ "attemptedSave-s"] is appended to
-
T Upon save of instance ¶: as R
-
Y Upon load -> instance ŧ: as W
-
U Upon save of instance ¶: as R
-
I Upon save of instance ŧ: as R
-
O Upon reload of instance ¶: retrievalLocation on the in memory lTSD is set to the current info[ "numberOfRetrieval-s"], data is retrieved from the store and used to update the data and tags field-s of the lTSD, then info[ "numberOfRetrieval-s"] += 1
-
P Upon save of instance ŧ: as R
-
A Upon save of instance ¶: as R
-
S Upon save of instance ¶: as R
--path „--
- D Upon deletion of piece 5: those connected to the delete signal are notified; a deleted response is given to any who would save after deletion, which allows for code to be written agnostic of saving method, though they will have to react to deletion in some manner, either by passed response or by language error handling; a deleted state can be queried upon all instantiated lTSD's of a deleted piece
- F Upon reload of instance ŧ: the requested data is no longer present in the store, so either a deleted response is returned or a handlable error is raised ( maybe simply a not-present-in-store response and no deletion response); a response is fine, while an error must be designed around more specifically to prevent program halt, as a response will not halt by default; the ability to give a non halting response which can be very simply converted to a halting response would be good; for now it can return an enum
- G Upon save of instance ¶: as R
--path ¢--
-
D Upon transfer to another process, process m: the instance now within the other process is a new instance and is given the same treatment as loading of the piece
-
F Upon creation of piece 8 in process m: the retrieval index of the data is set to 0; info is created in the store
-
G Upon deletion of piece 5 from process m: those in process m are sent the delete signal; all lTSD instance-s in process m have the deleted state set upon them
-
H Upon load of piece 5 from process ^: the data can be reported as not present
-
J Upon process exit: the last process must know it is the last process accessing the store, and so the store must store all connected process-s; remove the info for each piece
merge all concepts into a common operation for the header-s
include handling of events that may have occurred in other process-s
include design of what should occur with versioned data; my hunch is that it should be specific to the dataId not the versionId, as we are conceptually constructing a single item; concepts may have to be altered however
Saving function ←
def firstRequestSaves(
    lTSD: LongTermStorageData,
    retrievalIndex: int,
    retrievalSequenceLength: int| None,
    attemptedSaves: list[ int]| None,
    successfulSaves: list[ int]| None,
    priorityLocation: float,
    presentInStore: bool,
)-> bool:
    if not presentInStore:
        return False
    if len( successfulSaves)== 0:
        return True
    if retrievalIndex>= successfulSaves[ -1]:
        return True
    return False
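A small usage check of this decider against the start of Happening sequence „, assuming the definition above is in scope; the lTSD argument is passed as None here only because this particular decider never inspects it:

# R: instance ſ ( retrievalIndex 0) saves while no successful saves exist yet -> permitted
print(firstRequestSaves(None, 0, 2, [], [], 0.5, True))      # True
# T: instance ¶ ( retrievalIndex 1) tries after ſ~s save landed at index 2 -> refused
print(firstRequestSaves(None, 1, 3, [0], [2], 0.5, True))    # False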
Scoping-s
Scoped to a single store using unversioned data
using Saving function ←
blank
- Q:
r: 1, a: [], s: []
->0
- W:
r: 2, a: [], s: []
->1
- E:
no transform
- R:
r: 3, a: [ 0], s: [ 2]
0-> 2
the saving function will return True: info saved, info component-s incremented
- T:
r: 3, a: [ 0, 1], s: [ 2]
nothing saved 1< 2
- Y:
r: 4, a: [ 0, 1], s: [ 2]
->3
- U:
r: 4, a: [ 0, 1, 1], s: [ 2]
nothing saved 1< 2
- I:
r: 5, a: [ 0, 1, 1, 3], s: [ 2, 4]
3-> 4
- O:
r: 6, a: [ 0, 1, 1, 3], s: [ 2, 4]
1-> 5
- P:
r: 7, a: [ 0, 1, 1, 3, 4], s: [ 2, 4, 6]
4-> 6
// This should be able to save, the philosophy of Saving function ← is that a save is permitted if it is the first in it~s "group" and after all those in the group are invalid
// A reload of an old piece to the current latest data, entry into the current valid "group" does not constitute invalidation of others in the current "group"
- A:
r: 7, a: [ 0, 1, 1, 3, 5], s: [ 2, 4, 6]
5-> 6
- S:
r: 8, a: [ 0, 1, 1, 3, 5, 6], s: [ 2, 4, 6, 7]
6-> 7
--„--
- D:
blank
deletion of info
- F:
blank
- G:
blank
function ← means no saving post deletion and so
--¢--
- D:
r: 9, a: [ 0, 1, 1, 3, 5, 6], s: [ 2, 4, 6, 7]
->8
- F:
r: 1, a: [], s: []
->0
- G:
blank all info
Scoped to a single store using versioned data
using Saving function ←
blank
- Q:
r: 1, a: [], s: []
->0
create info entry for the dataId; the constructing versionId is different to the loaded versionId
- W:
r: 2, a: [], s: []
->1
the constructing versionId is different to the loaded versionId and the versionId of instance ſ
- E:
no change
return True
- R:
r: 3, a: [ 0], s: [ 2]
0-> 2
success! ( instance ſ)~s retrievalIndex is changed to 2, it~s loaded versionId is the previous constructing one and the constructing versionId is newly generated
- T:
r: 3, a: [ 0, 1], s: [ 2]
failure
- Y:
r: 4, a: [ 0, 1], s: [ 2]
->3
new instance loaded in with its loaded versionId being that of the last save, so from the instance that was retrievalIndex 0
- U:
r: 4, a: [ 0, 1, 1], s: [ 2]
failure
- I:
r: 5, a: [ 0, 1, 1, 3], s: [ 2, 4]
3-> 4
- O:
r: 6, a: [ 0, 1, 1, 3], s: [ 2, 4]
1-> 5
- P:
- A:
- S:
--„--
- D:
- F:
- G:
--¢--
- D:
- F:
- G:
Added aspect-s that ( store interface)-s need to implement
-
lTSI._createPendingDeletionsEntry()
: unique pid creation can be left to the implementation; they can use getInstanceData( "uniqueProcessId")
-
( Upon load, # Newly created info data) should be dealt with in the implementation~s lTSD retrieval mechanism, in the cursor retrieved by retrieveData; pending deletion cleanup is handled by the retrieveData implementation wrapper
-
lTSI._pendingDeletionCleanup()
-
lTSI._handleDeletion( dataId: str)
-
lTSD._retrievalIndex
-
lTSI._retrieveSaveFunctionInput( dataId: str)-> annotated coolly
-
lTSI.savingFunction
need to serialise function-s, fine
-
lTSI._updateDataInfoWithSaveResult( originalRetrievalIndex: int, success: bool)
should append the original retrieval index to info[ "attemptedSave-s"]. If success then info[ "numberOfRetrieval-s"] should be incremented and the value before incrementation should be ( appended to info[ "successfulSave-s"], returned); if not success the originalRetrievalIndex should be returned. This does not utilise other function-s so as to avoid repeat transaction-s ( see the sketch after this list)
-
lTSI._getLTSDComponents
should raise an unpresent error if the data is unpresent
-
lTSI._incrementRetrievalCount( dataId: str)
needs to increment info[ "numberOfRetrieval-s"] and return it~s value before incrementation
-
lTSI._createNewDataInfo( dataId: str)
creates the new info entry according to this
-
the cursor in
lTSI._retrieveDataStoreSpecific
should either create or update info for new lTSD's
-
lTSI._createPendingDeletionsEntry()
append-s the process~s unique pid to the store~s known ( pending deletion)-s entries
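A minimal sketch of two of these aspect-s, again using the hypothetical _getDataInfo / _saveDataInfo round trip; the dataId argument to _updateDataInfoWithSaveResult is an assumption beyond the signature listed above:

def _updateDataInfoWithSaveResult(self, dataId: str, originalRetrievalIndex: int, success: bool)-> int:
    info = self._getDataInfo(dataId)
    info["attemptedSave-s"].append(originalRetrievalIndex)
    if success:
        newIndex = info["numberOfRetrieval-s"]        # value before incrementation
        info["numberOfRetrieval-s"] += 1
        info["successfulSave-s"].append(newIndex)
    else:
        newIndex = originalRetrievalIndex
    self._saveDataInfo(dataId, info)                  # single write-back, no repeat transaction-s
    return newIndex

def _incrementRetrievalCount(self, dataId: str)-> int:
    info = self._getDataInfo(dataId)
    previous = info["numberOfRetrieval-s"]
    info["numberOfRetrieval-s"] += 1
    self._saveDataInfo(dataId, info)
    return previous                                   # value before incrementation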
DB IMPS
mongo can have a collection called sessionInfo; this can have the unique pid-s of proc-s with pending deletions along with the info for each data id; perhaps the deletion-s can be a document and then all info-s are their own document-s ( see the sketch below)
no, there are deletion-s for each proc, so an id per proc in one collection, then an id per data in another document
so fsjson can do similar with proc deletion-s; info for data can be done in a similar manner; this requires three folder-s of file-s
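A hedged sketch of the two document shapes this suggests for the mongo variant; the collection names and "_id" choices here are assumptions, not decided layout:

# sessionInfo.pendingDeletions: one document per interacting process
{
    "_id": "ca662dd70d1048b7ae40e70976dc5a73_3012",   # unique process id
    "pendingDeletion-s": [ "<dataId>", ...],
}

# sessionInfo.dataInfo: one document per dataId
{
    "_id": "<dataId>",
    "numberOfRetrieval-s": 1,
    "attemptedSave-s": [],
    "successfulSave-s": [],
    "deleted": False,
}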
All same result-s as unversioned, specific to dataId not versionId
should transaction history be wiped ever? the possible size may need to be accounted for; a 32 bit int is fairly big, but wipe after a session to be sure, or wipe on startup if too big when no shutdown callback can be derived
what should happen upon a save, does a save reset a transaction state? in mark~s example the pulls made before the first save can no longer save; this signifies an end of a transaction session when a transaction session
when an aspect requests to save, the save function is run; currently there is only python, so this save function can be stored as a symbol reference
retrievalLocation -> obtained from mem object
retrievalSequenceLength -> obtained from store
savingLocation -> obtained from store
priorityLocation -> obtained from save call
deleted -> obtained from store: