There is perhaps a more Python-specific way of storing the data to be loaded into the Mongo database: a list of dictionaries. In this case, a dictionary is defined as {'name': __name__, 'service': __service__, 'web': __web__}. To add an element to the holding list (say, NHS): NHS.append({'name': 'Wigan General', 'service': 5, 'web': None}). Then, a function can be defined which returns the index of the list element matching its parameters; i.e.:
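Concretely, building the holding list might look like this (the second record is invented sample data; a list of plain dicts is also exactly the shape pymongo's insert methods accept):

```python
# Each record follows the {'name': ..., 'service': ..., 'web': ...} template.
NHS = []
NHS.append({'name': 'Wigan General', 'service': 5, 'web': None})
NHS.append({'name': 'Salford Royal', 'service': 4, 'web': 'srft.nhs.uk'})  # assumed sample

print(len(NHS))           # prints 2
print(NHS[0]['service'])  # prints 5
```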
>>> def idx(ky, val):
...     for item in NHS:
...         if item[ky] == val:
...             return NHS.index(item)
Usage:
>>> print(idx('name', 'Wigan General'))
will yield Wigan's index in the list. I'm quite curious how fast this is with several thousand records! But Python's ability to easily make sense of a complex data structure is impressive.
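One caveat on speed: the loop already walks the list, and then NHS.index(item) rescans it from the start to find the position. A single-pass sketch using enumerate (with the NHS sample data assumed for illustration) avoids the rescan:

```python
# Assumed sample data for illustration.
NHS = [
    {'name': 'Wigan General', 'service': 5, 'web': None},
    {'name': 'Salford Royal', 'service': 4, 'web': 'srft.nhs.uk'},
]

def idx_enum(ky, val):
    # enumerate yields (position, item) pairs, so the index is already
    # in hand when a match is found -- no second scan needed.
    for i, item in enumerate(NHS):
        if item[ky] == val:
            return i

print(idx_enum('name', 'Salford Royal'))  # prints 1
```

Like the original, this returns None (implicitly) when no record matches.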
Another way of searching, using list comprehensions:
>>> def idx2(ky, val):
...     lstIdx = [item[ky] == val for item in NHS]
...     return lstIdx.index(True)
Note that, unlike idx, this version always builds a boolean entry for every record, and it raises ValueError rather than returning None when nothing matches.
It would also be interesting to know if the bytecode generated by Python is different between the two.
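The standard-library dis module can answer that directly. A sketch (with the two functions redefined as above, and NHS left empty since its contents don't affect the compiled bytecode):

```python
import dis

NHS = []  # placeholder; the bytecode is the same regardless of contents

def idx(ky, val):
    for item in NHS:
        if item[ky] == val:
            return NHS.index(item)

def idx2(ky, val):
    lstIdx = [item[ky] == val for item in NHS]
    return lstIdx.index(True)

# Disassemble both; the list comprehension in idx2 compiles to its own
# nested code object, so the two bodies differ well beyond the loop itself.
dis.dis(idx)
dis.dis(idx2)
```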