Peer patch (fold, sub loc, proteinnet, ssp) #139

Merged
taylormjs merged 11 commits into main from peer-patch on Jul 10, 2025

Conversation

@taylormjs (Collaborator) commented Jul 8, 2025

Description

Fixes four PEER tasks (fold, subcellular localization, ProteinNet, and secondary structure prediction) and updates the base linear probe callback accordingly. All changes are backwards compatible.

Update: ProteinNet is omitted as a default task because residue-residue contact map prediction needs O(N^2) memory in sequence length. A follow-up MR will address this with an outer-product mean over lower-rank tensors.
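For context, here is one way a low-rank pair head along those lines can work (a minimal sketch; the class name, rank, and shapes are assumptions, not the planned implementation). Projecting d-dimensional residue embeddings down to a small rank r before forming the N x N pair tensor keeps pair features at O(N^2 * r^2) instead of O(N^2 * d^2):

import torch
import torch.nn as nn

class LowRankPairHead(nn.Module):
    """Hypothetical sketch: contact logits from outer products of
    low-rank projections. Not the actual PEER/ProteinNet code."""

    def __init__(self, d_model: int, rank: int = 16):
        super().__init__()
        self.proj_a = nn.Linear(d_model, rank)
        self.proj_b = nn.Linear(d_model, rank)
        self.out = nn.Linear(rank * rank, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, d_model) per-residue embeddings for one sequence
        a = self.proj_a(h)                        # (N, r)
        b = self.proj_b(h)                        # (N, r)
        pair = torch.einsum("ik,jl->ijkl", a, b)  # (N, N, r, r)
        pair = pair.flatten(start_dim=2)          # (N, N, r*r)
        return self.out(pair).squeeze(-1)         # (N, N) contact logits

With rank much smaller than d_model, the intermediate pair tensor stays tractable even for long sequences.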

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Performance improvement
  • Code refactoring

Testing

  • Tests pass locally
  • Added new tests for new functionality
  • Updated existing tests if needed

Checklist

  • Code follows style guidelines
  • Self-review completed
  • Documentation updated if needed
  • No breaking changes (or clearly documented)

taylormjs marked this pull request as ready for review July 9, 2025 17:55
taylormjs assigned taylormjs and ncfrey and unassigned taylormjs Jul 9, 2025

    Handles variable-length tensors that can't be stacked by default collation.
    """
    inputs = []
Contributor:

how about

inputs, targets = zip(*batch)
inputs = list(inputs)
targets = list(targets)
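
For reference, the suggestion slots into a collate function along these lines (a self-contained sketch; the function name and zero-padding scheme are assumptions, not necessarily the repo's code):

from torch.nn.utils.rnn import pad_sequence

def collate_variable_length(batch):
    # Default collation stacks tensors, which fails when sequence
    # lengths differ; pad inputs to the batch max instead.
    inputs, targets = zip(*batch)
    padded = pad_sequence(list(inputs), batch_first=True, padding_value=0.0)
    return padded, list(targets)

Passed as collate_fn to a DataLoader, this handles the variable-length case the docstring above describes.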

    PEERTask.PDBBIND,
}

# Define high memory-intensive tasks that need batch_size=1
Contributor:

special handling = small batch size?

Collaborator Author (@taylormjs):

Yeah, I'll clarify
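
For illustration, the clarified special-casing could look something like this (a hypothetical sketch; the helper name and set contents are assumptions, with task keys shown as strings rather than PEERTask members):

# Hypothetical sketch; the real code uses PEERTask enum members.
HIGH_MEMORY_TASKS = {"proteinnet"}

def get_batch_size(task: str, default: int = 32) -> int:
    # Tasks with O(N^2) targets (residue-residue contact maps)
    # drop to batch_size=1 to bound peak memory.
    return 1 if task in HIGH_MEMORY_TASKS else default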

    with torch.no_grad():
        for batch in dataloader:
            x, y = batch

def _get_standard_embeddings(
Contributor:

this module is getting pretty big - maybe some of these convenience functions can be factored out into a _peer_utils.py

Collaborator Author (@taylormjs):

Good idea, just simplified things quite a bit
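
For a sense of what a factored-out helper in a _peer_utils.py might look like (a hypothetical sketch; the signature and names are assumptions, not the merged code):

import torch

def get_standard_embeddings(model, dataloader, device="cpu"):
    # Collect embeddings and targets without tracking gradients;
    # assumes each batch is an (x, y) pair with x a single tensor.
    embeddings, targets = [], []
    with torch.no_grad():
        for batch in dataloader:
            x, y = batch
            embeddings.append(model(x.to(device)).cpu())
            targets.append(y)
    return torch.cat(embeddings), targets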

taylormjs merged commit f7e40eb into main Jul 10, 2025
5 checks passed
taylormjs deleted the peer-patch branch July 10, 2025 03:58