delete from SpikeSortingOutput exceeds max attempts #886

Closed · sharon-chiang opened this issue Mar 22, 2024 · 15 comments
Labels: bug (Something isn't working), infrastructure (Unix, MySQL, etc. settings/issues impacting users)

sharon-chiang (Contributor) commented Mar 22, 2024

I get the error below when trying to delete from SortGroup.

Code:

import spyglass.spikesorting.v0.spikesorting_recording as sgss
(sgss.SortGroup & {'nwb_file_name': 'J1620210529_.nwb',
                 'sort_group_id': 100}).cautious_delete()
Error stack
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/classes/digraph.py:899, in DiGraph.successors(self, n)
    898 try:
--> 899     return iter(self._succ[n])
    900 except KeyError as err:

KeyError: '`spikesorting_merge`.`spike_sorting_output`'

The above exception was the direct cause of the following exception:

NetworkXError                             Traceback (most recent call last)
Cell In[8], line 2
      1 import spyglass.spikesorting.v0.spikesorting_recording as sgss
----> 2 (sgss.SortGroup & {'nwb_file_name': 'J1620210529_.nwb',
      3                  'sort_group_id': 100}).cautious_delete()

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:452, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
    449 if not force_permission:
    450     self._check_delete_permission()
--> 452 merge_deletes = self.delete_downstream_merge(
    453     dry_run=True,
    454     disable_warning=True,
    455     return_parts=False,
    456 )
    458 safemode = (
    459     dj.config.get("safemode", True)
    460     if kwargs.get("safemode") is None
    461     else kwargs["safemode"]
    462 )
    464 if merge_deletes:

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:248, in SpyglassMixin.delete_downstream_merge(self, restriction, dry_run, reload_cache, disable_warning, return_parts, **kwargs)
    245 restriction = restriction or self.restriction or True
    247 merge_join_dict = {}
--> 248 for name, chain in self._merge_chains.items():
    249     join = chain.join(restriction)
    250     if join:

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
    991 val = cache.get(self.attrname, _NOT_FOUND)
    992 if val is _NOT_FOUND:
--> 993     val = self.func(instance)
    994     try:
    995         cache[self.attrname] = val

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:172, in SpyglassMixin._merge_chains(self)
    161 """Dict of chains to merges downstream of self
    162 
    163 Format: {full_table_name: TableChains}.
   (...)
    169 delete_downstream_merge call.
    170 """
    171 merge_chains = {}
--> 172 for name, merge_table in self._merge_tables.items():
    173     chains = TableChains(self, merge_table, connection=self.connection)
    174     if len(chains):

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
    991 val = cache.get(self.attrname, _NOT_FOUND)
    992 if val is _NOT_FOUND:
--> 993     val = self.func(instance)
    994     try:
    995         cache[self.attrname] = val

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:150, in SpyglassMixin._merge_tables(self)
    147             merge_tables[master_name] = master
    148             search_descendants(master)
--> 150 _ = search_descendants(self)
    152 logger.info(
    153     f"Building merge cache for {self.table_name}.\n\t"
    154     + f"Found {len(merge_tables)} downstream merge tables"
    155 )
    157 return merge_tables

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:148, in SpyglassMixin._merge_tables.<locals>.search_descendants(parent)
    146 if MERGE_PK in master.heading.names:
    147     merge_tables[master_name] = master
--> 148     search_descendants(master)

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:138, in SpyglassMixin._merge_tables.<locals>.search_descendants(parent)
    137 def search_descendants(parent):
--> 138     for desc in parent.descendants(as_objects=True):
    139         if (
    140             MERGE_PK not in desc.heading.names
    141             or not (master_name := get_master(desc.full_table_name))
    142             or master_name in merge_tables
    143         ):
    144             continue

File ~/src/datajoint-python/datajoint/table.py:215, in Table.descendants(self, as_objects)
    207 def descendants(self, as_objects=False):
    208     """
    209 
    210     :param as_objects: False - a list of table names; True - a list of table objects.
    211     :return: list of tables descendants in topological order.
    212     """
    213     return [
    214         FreeTable(self.connection, node) if as_objects else node
--> 215         for node in self.connection.dependencies.descendants(self.full_table_name)
    216         if not node.isdigit()
    217     ]

File ~/src/datajoint-python/datajoint/dependencies.py:170, in Dependencies.descendants(self, full_table_name)
    165 """
    166 :param full_table_name:  In form `schema`.`table_name`
    167 :return: all dependent tables sorted in topological order.  Self is included.
    168 """
    169 self.load(force=False)
--> 170 nodes = self.subgraph(nx.algorithms.dag.descendants(self, full_table_name))
    171 return unite_master_parts(
    172     [full_table_name] + list(nx.algorithms.dag.topological_sort(nodes))
    173 )

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/utils/backends.py:412, in _dispatch.__call__(self, backend, *args, **kwargs)
    409 def __call__(self, /, *args, backend=None, **kwargs):
    410     if not backends:
    411         # Fast path if no backends are installed
--> 412         return self.orig_func(*args, **kwargs)
    414     # Use `backend_name` in this function instead of `backend`
    415     backend_name = backend

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/algorithms/dag.py:73, in descendants(G, source)
     39 @nx._dispatch
     40 def descendants(G, source):
     41     """Returns all nodes reachable from `source` in `G`.
     42 
     43     Parameters
   (...)
     71     ancestors
     72     """
---> 73     return {child for parent, child in nx.bfs_edges(G, source)}

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/algorithms/dag.py:73, in <setcomp>(.0)
     39 @nx._dispatch
     40 def descendants(G, source):
     41     """Returns all nodes reachable from `source` in `G`.
     42 
     43     Parameters
   (...)
     71     ancestors
     72     """
---> 73     return {child for parent, child in nx.bfs_edges(G, source)}

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/algorithms/traversal/breadth_first_search.py:203, in bfs_edges(G, source, reverse, depth_limit, sort_neighbors)
    199     yield from generic_bfs_edges(
    200         G, source, lambda node: iter(sort_neighbors(successors(node))), depth_limit
    201     )
    202 else:
--> 203     yield from generic_bfs_edges(G, source, successors, depth_limit)

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/algorithms/traversal/breadth_first_search.py:103, in generic_bfs_edges(G, source, neighbors, depth_limit, sort_neighbors)
    101 n = len(G)
    102 depth = 0
--> 103 next_parents_children = [(source, neighbors(source))]
    104 while next_parents_children and depth < depth_limit:
    105     this_parents_children = next_parents_children

File ~/Documents/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/classes/digraph.py:901, in DiGraph.successors(self, n)
    899     return iter(self._succ[n])
    900 except KeyError as err:
--> 901     raise NetworkXError(f"The node {n} is not in the digraph.") from err

NetworkXError: The node `spikesorting_merge`.`spike_sorting_output` is not in the digraph.

CBroz1 changed the title from "Cannot delete from SortGroup table" to "delete_downstream_merge finds a table not in the graph" Mar 22, 2024

CBroz1 (Member) commented Mar 22, 2024

I'm not 100% sure what's going on here. delete_downstream_merge...

  1. Finds a table in the graph that looks like a merge table
  2. Attempts to call that table as an object, but fails to find it in the graph.

That suggests that the graph DJ is using doesn't have access to all the same nodes that Spyglass does.

It would help me debug if I knew...

  1. Which tables were imported before this was called? Does it happen if only the code snippet above is run?
  2. Does it happen if the failing table, SpikeSortingOutput, is imported before running?

A known issue with delete_downstream_merge is that the tables have to be loaded in order to be accessed. When they aren't, though, it typically just fails to find them at all, resulting in the 'can't delete part' error we had before adding this step.
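
One quick way to check whether that node is visible to DataJoint is to inspect the dependency graph directly (a minimal sketch, assuming an active connection; the table name is copied from the error above):

import datajoint as dj

conn = dj.conn()
conn.dependencies.load()  # rebuild the graph of tables DataJoint knows about
# False here would mean the merge table is missing from the graph, matching the error
print("`spikesorting_merge`.`spike_sorting_output`" in conn.dependencies.nodes)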

sharon-chiang (Contributor, Author) commented:

  1. No tables. Yes, this happens if only the code snippet above is run.
  2. I cannot import the SpikeSortingOutput table due to a module not found error, and it's not clear to me why. I am up to date on master. I get this error.
import spyglass.spikesorting.spikesorting_merge.SpikeSortingOutput

Error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[7], line 1
----> 1 import spyglass.spikesorting.spikesorting_merge.SpikeSortingOutput

ModuleNotFoundError: No module named 'spyglass.spikesorting.spikesorting_merge.SpikeSortingOutput'; 'spyglass.spikesorting.spikesorting_merge' is not a package

CBroz1 (Member) commented Mar 22, 2024

Please try

from spyglass.spikesorting.spikesorting_merge import SpikeSortingOutput
import spyglass.spikesorting.v0.spikesorting_recording as sgss

(sgss.SortGroup & {
    'nwb_file_name': 'J1620210529_.nwb',
    'sort_group_id': 100
}).cautious_delete()

sharon-chiang (Contributor, Author) commented Mar 22, 2024

Thanks Chris. That works to delete some entries, but then yields this error:

Error stack
---------------------------------------------------------------------------
DataJointError                            Traceback (most recent call last)
Cell In[8], line 4
      1 from spyglass.spikesorting.spikesorting_merge import SpikeSortingOutput
      2 import spyglass.spikesorting.v0.spikesorting_recording as sgss
----> 4 (sgss.SortGroup & {
      5     'nwb_file_name': 'J1620210529_.nwb',
      6     'sort_group_id': 100
      7 }).cautious_delete()

File ~/src/spyglass/src/spyglass/utils/dj_mixin.py:479, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
    476         self._log_use(start)
    477         return
--> 479 super().delete(*args, **kwargs)  # Additional confirm here
    481 self._log_use(start=start, merge_deletes=merge_deletes)

File ~/src/datajoint-python/datajoint/table.py:586, in Table.delete(self, transaction, safemode, force_parts)
    584 # Cascading delete
    585 try:
--> 586     delete_count = cascade(self)
    587 except:
    588     if transaction:

File ~/src/datajoint-python/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
    554     else:
    555         child &= table.proj()
--> 556     cascade(child)
    557 else:
    558     deleted.add(table.full_table_name)

File ~/src/datajoint-python/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
    554     else:
    555         child &= table.proj()
--> 556     cascade(child)
    557 else:
    558     deleted.add(table.full_table_name)

    [... skipping similar frames: Table.delete.<locals>.cascade at line 556 (1 times)]

File ~/src/datajoint-python/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
    554     else:
    555         child &= table.proj()
--> 556     cascade(child)
    557 else:
    558     deleted.add(table.full_table_name)

File ~/src/datajoint-python/datajoint/table.py:566, in Table.delete.<locals>.cascade(table)
    564         break
    565 else:
--> 566     raise DataJointError("Exceeded maximum number of delete attempts.")
    567 return delete_count

DataJointError: Exceeded maximum number of delete attempts.

CBroz1 self-assigned this Mar 22, 2024
CBroz1 changed the title from "delete_downstream_merge finds a table not in the graph" to "delete from SpikeSortingOutput exceeds max attempts" Mar 22, 2024

CBroz1 (Member) commented Mar 22, 2024

Ok, I'll look into this. It looks like cautious_delete did what it was supposed to, but the table structure causes some issues with DJ's delete process.

EDIT: There's some kind of circularity going on during this delete:

[2024-03-25 11:16:08,853][INFO]: Cascading `spikesorting_recording`.`sort_group`, 0/50
[2024-03-25 11:16:08,866][INFO]: Cascading `spikesorting_recording`.`sort_group__sort_group_electrode`, 0/50
[2024-03-25 11:16:08,886][INFO]: Deleting 58 from `spikesorting_recording`.`sort_group__sort_group_electrode`
[2024-03-25 11:16:08,887][INFO]: Cascading `spikesorting_recording`.`sort_group`, 1/50
[2024-03-25 11:16:08,899][INFO]: Cascading `spikesorting_recording`.`spike_sorting_recording_selection`, 0/50
[2024-03-25 11:16:08,912][INFO]: Cascading `spikesorting_recording`.`__spike_sorting_recording`, 0/50
[2024-03-25 11:16:08,925][INFO]: Cascading `spikesorting_artifact`.`artifact_detection_selection`, 0/50
[2024-03-25 11:16:08,938][INFO]: Cascading `spikesorting_artifact`.`__artifact_detection`, 0/50
[2024-03-25 11:16:08,959][INFO]: Deleting 5 from `spikesorting_artifact`.`__artifact_detection`
[2024-03-25 11:16:08,959][INFO]: Cascading `spikesorting_artifact`.`artifact_detection_selection`, 1/50
[2024-03-25 11:16:08,972][INFO]: Cascading `spikesorting_artifact`.`artifact_removed_interval_list`, 0/50
[2024-03-25 11:16:08,985][INFO]: Cascading `spikesorting_sorting`.`spike_sorting_selection`, 0/50
> delete_quick returns 0, resulting in loop

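The per-table "Cascading" lines above appear to come from added debug logging; as a rough way to get similar visibility when reproducing this, one can raise DataJoint's logger level before calling the delete (a hedged sketch; "datajoint" is the library's root logger name):

import logging

# Show DataJoint's internal INFO/DEBUG messages (e.g., per-table delete counts)
# while re-running the failing cautious_delete() call.
logging.getLogger("datajoint").setLevel(logging.DEBUG)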

xlsun79 (Contributor) commented Mar 27, 2024

Similar issue here when trying to delete one entry from the Nwbfile table so I could re-insert the correct data from the same day:

NetworkXError: The node lfp_merge.l_f_p_output is not in the graph.

CBroz1 (Member) commented Mar 27, 2024

Hi @xlsun79 - The missing node error can be solved by importing the table and attempting to rerun. If you see a 'max attempt' error even after importing, please post your error stack in the following format:

<details><summary>Error stack</summary>
```python
# Stack here
```
</details>
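
For the lfp_merge.l_f_p_output node named above, the import-then-retry pattern would look something like this (a hedged sketch; the exact module path is an assumption based on the schema name):

# Import the merge table named in the error so it is registered in the graph,
# then retry the delete. The LFPOutput import path is an assumption.
from spyglass.lfp.lfp_merge import LFPOutput  # noqa: F401
from spyglass.common import Nwbfile

(Nwbfile() & {"nwb_file_name": "Lewis20240222_.nwb"}).cautious_delete()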

xlsun79 (Contributor) commented Mar 28, 2024

Thanks @CBroz1! I imported all the merge tables and reran, which solved the table-not-in-the-graph error. I didn't run into a max-attempt error, but the following happened:
Code:

nwb_file_name = "Lewis20240222_.nwb"
(Nwbfile() & {'nwb_file_name':nwb_file_name}).cautious_delete()
Error stack
---------------------------------------------------------------------------
IntegrityError                            Traceback (most recent call last)
Cell In [17], line 1
----> 1 (Nwbfile() & {'nwb_file_name':nwb_copy_file_name}).cautious_delete()

File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:479, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
    476         self._log_use(start)
    477         return
--> 479 super().delete(*args, **kwargs)  # Additional confirm here
    481 self._log_use(start=start, merge_deletes=merge_deletes)

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/table.py:561, in Table.delete(self, transaction, safemode, force_parts)
    559 # Cascading delete
    560 try:
--> 561     delete_count = cascade(self)
    562 except:
    563     if transaction:

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/table.py:479, in Table.delete.<locals>.cascade(table)
    477 for _ in range(max_attempts):
    478     try:
--> 479         delete_count = table.delete_quick(get_count=True)
    480     except IntegrityError as error:
    481         match = foreign_key_error_regexp.match(error.args[0]).groupdict()

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/table.py:453, in Table.delete_quick(self, get_count)
    448 """
    449 Deletes the table without cascading and without user prompt.
    450 If this table has populated dependent tables, this will fail.
    451 """
    452 query = "DELETE FROM " + self.full_table_name + self.where_clause()
--> 453 self.connection.query(query)
    454 count = (
    455     self.connection.query("SELECT ROW_COUNT()").fetchone()[0]
    456     if get_count
    457     else None
    458 )
    459 self._log(query[:255])

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/connection.py:340, in Connection.query(self, query, args, as_dict, suppress_warnings, reconnect)
    338 cursor = self._conn.cursor(cursor=cursor_class)
    339 try:
--> 340     self._execute_query(cursor, query, args, suppress_warnings)
    341 except errors.LostConnectionError:
    342     if not reconnect:

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/connection.py:296, in Connection._execute_query(cursor, query, args, suppress_warnings)
    294         cursor.execute(query, args)
    295 except client.err.Error as err:
--> 296     raise translate_query_error(err, query)

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/datajoint/connection.py:294, in Connection._execute_query(cursor, query, args, suppress_warnings)
    291         if suppress_warnings:
    292             # suppress all warnings arising from underlying SQL library
    293             warnings.simplefilter("ignore")
--> 294         cursor.execute(query, args)
    295 except client.err.Error as err:
    296     raise translate_query_error(err, query)

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/cursors.py:148, in Cursor.execute(self, query, args)
    144     pass
    146 query = self.mogrify(query, args)
--> 148 result = self._query(query)
    149 self._executed = query
    150 return result

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/cursors.py:310, in Cursor._query(self, q)
    308 self._last_executed = q
    309 self._clear_result()
--> 310 conn.query(q)
    311 self._do_get_result()
    312 return self.rowcount

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:548, in Connection.query(self, sql, unbuffered)
    546     sql = sql.encode(self.encoding, "surrogateescape")
    547 self._execute_command(COMMAND.COM_QUERY, sql)
--> 548 self._affected_rows = self._read_query_result(unbuffered=unbuffered)
    549 return self._affected_rows

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:775, in Connection._read_query_result(self, unbuffered)
    773 else:
    774     result = MySQLResult(self)
--> 775     result.read()
    776 self._result = result
    777 if result.server_status is not None:

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:1156, in MySQLResult.read(self)
   1154 def read(self):
   1155     try:
-> 1156         first_packet = self.connection._read_packet()
   1158         if first_packet.is_ok_packet():
   1159             self._read_ok_packet(first_packet)

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/connections.py:725, in Connection._read_packet(self, packet_type)
    723     if self._result is not None and self._result.unbuffered_active is True:
    724         self._result.unbuffered_active = False
--> 725     packet.raise_for_error()
    726 return packet

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/protocol.py:221, in MysqlPacket.raise_for_error(self)
    219 if DEBUG:
    220     print("errno =", errno)
--> 221 err.raise_mysql_exception(self._data)

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/pymysql/err.py:143, in raise_mysql_exception(data)
    141 if errorclass is None:
    142     errorclass = InternalError if errno < 1000 else OperationalError
--> 143 raise errorclass(errno, errval)

IntegrityError: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails')

xlsun79 (Contributor) commented Mar 28, 2024

I have some updates from trying to debug the last error. I figured the foreign key error may be due to the requirement to delete child tables before deleting the parent table, so I tried deleting from sgc.Session(), but then got a different error:
Code:
(sgc.Session() & {'nwb_file_name':nwb_copy_file_name}).cautious_delete()

Error
[23:55:14][INFO] Spyglass: Queueing delete for session(s):
*nwb_file_name *lab_member_na
+------------+ +------------+
Lewis20240222_ Xulu Sun      
 (Total: 1)

[23:55:16][INFO] Spyglass: Building merge cache for _session.
	Found 4 downstream merge tables
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In [32], line 1
----> 1 (sgc.Session() & {'nwb_file_name':nwb_copy_file_name}).cautious_delete()

File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:452, in SpyglassMixin.cautious_delete(self, force_permission, *args, **kwargs)
    449 if not force_permission:
    450     self._check_delete_permission()
--> 452 merge_deletes = self.delete_downstream_merge(
    453     dry_run=True,
    454     disable_warning=True,
    455     return_parts=False,
    456 )
    458 safemode = (
    459     dj.config.get("safemode", True)
    460     if kwargs.get("safemode") is None
    461     else kwargs["safemode"]
    462 )
    464 if merge_deletes:

File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:248, in SpyglassMixin.delete_downstream_merge(self, restriction, dry_run, reload_cache, disable_warning, return_parts, **kwargs)
    245 restriction = restriction or self.restriction or True
    247 merge_join_dict = {}
--> 248 for name, chain in self._merge_chains.items():
    249     join = chain.join(restriction)
    250     if join:

File ~/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
    991 val = cache.get(self.attrname, _NOT_FOUND)
    992 if val is _NOT_FOUND:
--> 993     val = self.func(instance)
    994     try:
    995         cache[self.attrname] = val

File ~/code/spyglass/src/spyglass/utils/dj_mixin.py:173, in SpyglassMixin._merge_chains(self)
    171 merge_chains = {}
    172 for name, merge_table in self._merge_tables.items():
--> 173     chains = TableChains(self, merge_table, connection=self.connection)
    174     if len(chains):
    175         merge_chains[name] = chains

File ~/code/spyglass/src/spyglass/utils/dj_chains.py:76, in TableChains.__init__(self, parent, child, connection)
     74 self.part_names = [part.full_table_name for part in parts]
     75 self.chains = [TableChain(parent, part) for part in parts]
---> 76 self.has_link = any([chain.has_link for chain in self.chains])

File ~/code/spyglass/src/spyglass/utils/dj_chains.py:76, in <listcomp>(.0)
     74 self.part_names = [part.full_table_name for part in parts]
     75 self.chains = [TableChain(parent, part) for part in parts]
---> 76 self.has_link = any([chain.has_link for chain in self.chains])

File ~/code/spyglass/src/spyglass/utils/dj_chains.py:231, in TableChain.has_link(self)
    225 """Return True if parent is linked to child.
    226 
    227 If not searched, search for path. If searched and no link is found,
    228 return False. If searched and link is found, return True.
    229 """
    230 if not self._searched:
--> 231     _ = self.path
    232 return self.link_type is not None

File ~/anaconda3/envs/spyglass/lib/python3.9/functools.py:993, in cached_property.__get__(self, instance, owner)
    991 val = cache.get(self.attrname, _NOT_FOUND)
    992 if val is _NOT_FOUND:
--> 993     val = self.func(instance)
    994     try:
    995         cache[self.attrname] = val

File ~/code/spyglass/src/spyglass/utils/dj_chains.py:300, in TableChain.path(self)
    297     return None
    299 link = None
--> 300 if link := self.find_path(directed=True):
    301     self.link_type = "directed"
    302 elif link := self.find_path(directed=False):

File ~/code/spyglass/src/spyglass/utils/dj_chains.py:285, in TableChain.find_path(self, directed)
    283     if not prev_table:
    284         raise ValueError("Alias node found without prev table.")
--> 285     attr_map = self.graph[table][prev_table]["attr_map"]
    286     ret[prev_table]["attr_map"] = attr_map
    287 else:

File ~/anaconda3/envs/spyglass/lib/python3.9/site-packages/networkx/classes/coreviews.py:54, in AtlasView.__getitem__(self, key)
     53 def __getitem__(self, key):
---> 54     return self._atlas[key]

KeyError: '`position_v1_dlc_centroid`.`__d_l_c_centroid`'

edeno added the bug and infrastructure labels Mar 29, 2024
CBroz1 added a commit to CBroz1/spyglass that referenced this issue Mar 29, 2024
xlsun79 (Contributor) commented Apr 14, 2024

Hi @CBroz1, I was wondering if there's any way to fix the error above. I need to delete my entry from the sgc.Session() table before I can delete it from Nwbfile() and reinsert the correct data; otherwise I won't be able to analyze data from that day. Thank you!

CBroz1 (Member) commented Apr 18, 2024

In a possibly related case, a user reported that the cascade failed on this recording table because it yielded an invalid restriction:

DELETE FROM `spikesorting_v1_recording`.`__spike_sorting_recording` WHERE ( (`nwb_file_name`="bobrick20231204_.nwb"))
Error stack
session_entry = sgc.Session & {'nwb_file_name': nwb_copy_file_name}
session_entry.super_delete()

---------------------------------------------------------------------------
UnknownAttributeError                     Traceback (most recent call last)
Cell In[49], line 4
      2 nwb_copy_file_name = get_nwb_copy_filename(nwb_file_name)
      3 session_entry = sgc.Session & {'nwb_file_name': nwb_copy_file_name}
----> 4 session_entry.super_delete()

File ~/Documents/gabby/spyglass/src/spyglass/utils/dj_mixin.py:537, in SpyglassMixin.super_delete(self, *args, **kwargs)
    535 logger.warning("!! Using super_delete. Bypassing cautious_delete !!")
    536 self._log_use(start=time(), super_delete=True)
--> 537 super().delete(*args, **kwargs)

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:586, in Table.delete(self, transaction, safemode, force_parts)
    584 # Cascading delete
    585 try:
--> 586     delete_count = cascade(self)
    587 except:
    588     if transaction:

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
    554     else:
    555         child &= table.proj()
--> 556     cascade(child)
    557 else:
    558     deleted.add(table.full_table_name)

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
    554     else:
    555         child &= table.proj()
--> 556     cascade(child)
    557 else:
    558     deleted.add(table.full_table_name)

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:556, in Table.delete.<locals>.cascade(table)
    554     else:
    555         child &= table.proj()
--> 556     cascade(child)
    557 else:
    558     deleted.add(table.full_table_name)

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:504, in Table.delete.<locals>.cascade(table)
    502 for _ in range(max_attempts):
    503     try:
--> 504         delete_count = table.delete_quick(get_count=True)
    505     except IntegrityError as error:
    506         match = foreign_key_error_regexp.match(error.args[0]).groupdict()

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/table.py:463, in Table.delete_quick(self, get_count)
    458 """
    459 Deletes the table without cascading and without user prompt.
    460 If this table has populated dependent tables, this will fail.
    461 """
    462 query = "DELETE FROM " + self.full_table_name + self.where_clause()
--> 463 self.connection.query(query)
    464 count = (
    465     self.connection.query("SELECT ROW_COUNT()").fetchone()[0]
    466     if get_count
    467     else None
    468 )
    469 self._log(query[:255])

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/connection.py:340, in Connection.query(self, query, args, as_dict, suppress_warnings, reconnect)
    338 cursor = self._conn.cursor(cursor=cursor_class)
    339 try:
--> 340     self._execute_query(cursor, query, args, suppress_warnings)
    341 except errors.LostConnectionError:
    342     if not reconnect:

File ~/mambaforge/envs/gabby_spyglass_env/lib/python3.9/site-packages/datajoint/connection.py:296, in Connection._execute_query(cursor, query, args, suppress_warnings)
    294         cursor.execute(query, args)
    295 except client.err.Error as err:
--> 296     raise translate_query_error(err, query)

UnknownAttributeError: Unknown column 'nwb_file_name' in 'where clause'
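
A quick way to confirm the mismatch is to check whether the attribute from the failing WHERE clause is actually part of the table's heading (a minimal sketch; the spyglass import path for the v1 recording table is an assumption):

from spyglass.spikesorting.v1.recording import SpikeSortingRecording

# False would mean the cascade-built restriction on `nwb_file_name` cannot be
# applied to this table directly, consistent with the error above.
print("nwb_file_name" in SpikeSortingRecording().heading.names)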

edeno pushed a commit that referenced this issue Apr 19, 2024
* #892

* #885

* #879

* Partial address of #860

* Update Changelog

* Partial solve of #886 - Ask import

* Fix failing tests

* Add note on order of inheritace

* #933

* Could not replicate fill_nan error. Reverting except clause
edeno added a commit that referenced this issue Apr 25, 2024
edeno added a commit that referenced this issue May 6, 2024
* Create class for group parts to help propagate deletes

* spelling

* update changelog

* Part delete edits (#946)

CBroz1 (Member) commented May 31, 2024

I have been able to replicate the Unknown column issue and discussed it on the DataJoint Slack.

CBroz1 (Member) commented Jun 3, 2024

Submitted upstream as datajoint-python #1159

edeno (Collaborator) commented Aug 5, 2024

Merged into datajoint here: datajoint/datajoint-python#1160

But not released. Should we close @CBroz1 ?

CBroz1 (Member) commented Aug 5, 2024

Yes, I think we can close. We already depend on the unreleased version for password management. I'm hoping they'll be able to make a release after #1158.
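
For anyone needing the fix before the next release: installing DataJoint from the repository, e.g. `pip install git+https://github.com/datajoint/datajoint-python.git`, is one way to pick up the merged-but-unreleased change (an illustrative command, not something prescribed in this thread).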

CBroz1 closed this as completed Aug 5, 2024