I train on a local GPU (RTX 3090) with 24 GB of memory. You can reduce the train_batch_size and eval_batch_size to avoid running out of CUDA memory.

Google Colab typically provides a GPU with about 12 GB of memory, so you should be able to run this code there if you halve the train_batch_size and eval_batch_size.
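The adjustment above can be sketched as a small helper. This is a minimal illustration, assuming a dict-style config that uses the train_batch_size and eval_batch_size names mentioned above (the starting values are hypothetical); adapt it to whatever config object your training framework actually uses.

```python
# Illustrative config; the starting values are assumptions, not the
# article's actual settings.
config = {
    "train_batch_size": 32,  # value that fits on a 24 GB GPU
    "eval_batch_size": 64,
}

def halve_batch_sizes(cfg):
    """Return a copy of cfg with both batch sizes halved (never below 1)."""
    out = dict(cfg)
    for key in ("train_batch_size", "eval_batch_size"):
        out[key] = max(1, cfg[key] // 2)
    return out

# Config for a ~12 GB GPU such as the one Colab provides.
colab_config = halve_batch_sizes(config)
print(colab_config)  # {'train_batch_size': 16, 'eval_batch_size': 32}
```

If halving once still raises a CUDA out-of-memory error, keep reducing the batch sizes until training fits on your GPU.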

Good luck with your final year project!
