Funding for Agri-food Data Canada is provided in part by the Canada First Research Excellence Fund
The next stop on our RDM travels is “Documenting your work”. Those 3 words can scare a lot of people – let’s face it, that means spending time writing things down, or creating scripts, or it could be viewed as taking time away from conducting research and analysis. Yes, I know, I know – and anyone who has worked with me in the past knows that I value documentation VERY highly! Without documentation, your data is valuable to YOU at this moment, but 6 months or 5 years down the road it may become useless. On this note, before I start talking about the details of documenting your work, I would like to share the Data Sharing and Management Snafu in 3 Short Acts video. I cannot believe that this video is 10 years old – but it is still SO relevant. If you have not seen it, please watch it! It highlights WHY we are talking about RDM – and near the end it deals with our topic today – documenting your data.
Reference: NYU Health Sciences Library. “Data Sharing and Management Snafu in 3 Short Acts” YouTube, 19 Dec 2012, https://www.youtube.com/watch?v=N2zK3sAtr-4.
So let’s talk about variable names for your statistical analyses. Creating variable names is usually done with statistical analysis packages in mind. Let’s be honest: we only want to create the variable names once – if we have to rename them, we increase our chances of introducing oddities into our analyses and outputs. Hmm… could I be talking about personal experience? How many times, in the past, have I fallen into the trap of naming my variables V1, V2, V3, etc., or ME1 or ME_final? It is so easy to fall into these situations, especially when we have a deadline looming. So let’s try to build some habits that will help us avoid these situations and help us create data documentation that can eventually be shared and understood by researchers outside of our inner circle. A great place to begin is by reviewing the naming characteristics of the most popular packages used by University of Guelph researchers – based on a survey I conducted in 2017.
Maximum variable name length:
SAS: 32 characters
Stata: 32 characters
Matlab: 32 characters
SPSS: 64 bytes = 64 characters in English or 32 characters in Chinese
R: 10,000 characters

First character of the name:
SAS: MUST be a letter or an underscore
Stata: MUST be a letter or an underscore
Matlab: MUST be a letter
SPSS: MUST be a letter, an underscore, or @, #, $
R: No restrictions found

Special characters within the name:
SAS: NOT allowed
Stata: NOT allowed
Matlab: No restrictions found
SPSS: ONLY period and @ are allowed
R: ONLY period is allowed

Case sensitivity:
SAS: Mixed case – presentation only
Stata: Mixed case – presentation only
Matlab: Case sensitive
SPSS: Mixed case – presentation only
R: Case sensitive
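Rules like these are easy to check mechanically before you commit to a set of names. As a small sketch (in Python, though any scripting language would do), here is a checker for the strictest common denominator above – SAS-style rules: at most 32 characters, starting with a letter or underscore, no special characters. The function name and examples are mine, not from any package.

```python
import re

def valid_sas_name(name: str) -> bool:
    """Check a candidate variable name against SAS-style rules:
    at most 32 characters, starts with a letter or underscore,
    and contains only letters, digits, and underscores."""
    return (len(name) <= 32
            and re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None)

print(valid_sas_name("fibre_cm"))    # True  – short, starts with a letter
print(valid_sas_name("2nd_weight"))  # False – starts with a digit
print(valid_sas_name("price$"))      # False – special character
```

A name that passes this test will also be legal in Stata, and (per the table above) in the more permissive packages as well.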
Based on the naming characteristics listed above, the following is a list of Recommended Best Practices to consider when naming your variables:
Heading in Excel or description of the measure to be taken → variable name to be used in a statistical analysis
Diet A → diet_a
Fibre length in centimetres → fibre_cm
Location of farm → location
Price paid for fleece → price
Weight measured during 2nd week of trial → weight2
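The mechanical part of that mapping – lowercasing and collapsing spaces and punctuation to underscores – can be scripted; picking a good abbreviation (like fibre_cm for “Fibre length in centimetres”) still takes a human. Here is a minimal sketch in Python; the helper name is mine:

```python
import re

def to_variable_name(heading: str) -> str:
    """Turn a descriptive heading into a short, analysis-friendly name:
    lowercase, with runs of spaces/punctuation collapsed to underscores."""
    name = heading.strip().lower()
    name = re.sub(r"[^a-z0-9]+", "_", name).strip("_")
    return name

print(to_variable_name("Diet A"))            # diet_a
print(to_variable_name("Location of farm"))  # location_of_farm
```

You would still shorten location_of_farm to location by hand – and record that mapping in your documentation.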
Let’s ALWAYS ensure that we keep the descriptive part, or label, for each variable name documented. Check out the Semantic Engine, an easy-to-use tool to document your dataset!
Variable names are only one piece of the documentation for any study, but it’s usually the first piece we tend to work on as we collect our data or once we start the analysis. Next RDM post I will talk about the other aspects of documentation and present different ways to do it.
Show of hands – how many people reading this blog know where their current research data can be found on their laptop or computer? No peeking and no searching! Can you tell me where your data is without looking? Let’s be honest now! I suspect a number of you do know where your data is, but I will also suggest that a number of you do not. When I hold consulting meetings with students and researchers, quite often I get the “just a minute, let me find my data”, “oh, that’s not the right one”, “it should be here, where did it go?”, “I made a change last night and I can’t remember what I called it”. Do any of these sound a little familiar? There’s nothing wrong with any of this – I will confess to saying a lot of these myself – and I teach this stuff – you would think I, of all people, should know better. But we’re all human, and when it gets busy or we get so involved with our work, well… we forget and take shortcuts.
So, what am I going on about? Organizing your data! Let’s take this post to walk through some recommended best practices.
Consider creating an acronym for your project and creating a folder for ALL project information. For example, I have a project working with my MSc data on imaging swine carcasses. I have a folder on my laptop called RESEARCH, then I have a folder for this project called MSC_SCI. Any and all files related to this project can be found in this folder. That’s step one.
I like to create a folder structure within my project folder. I create a folder for admin, analysis_code, data, litreview, outputs, and anything that seems appropriate to me for the project. Within each of these folders I may create subfolders. For example, under admin, I usually have one for budget, one for hr, and one for reports. Under the data folder I may add subfolders based on my collection procedures.
Take note that all my folders start with my project acronym – an easy way to find the project and all its associated content.
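A structure like this can be set up once with a few lines of script, so every new project starts the same way. Here is a sketch in Python; the acronym and folder list are illustrative (taken from the example above), not prescriptive:

```python
from pathlib import Path

# Hypothetical project acronym and the folder skeleton described above;
# every folder leads with the acronym so a search finds the whole project.
acronym = "MSC_SCI"
structure = {
    f"{acronym}_admin": ["budget", "hr", "reports"],
    f"{acronym}_analysis_code": [],
    f"{acronym}_data": [],
    f"{acronym}_litreview": [],
    f"{acronym}_outputs": [],
}

root = Path("RESEARCH") / acronym
for folder, subs in structure.items():
    for sub in subs or [""]:          # "" just creates the folder itself
        (root / folder / sub).mkdir(parents=True, exist_ok=True)
```

Run once at project kickoff, and the data subfolders can be added later to match your collection procedures.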
This is where the fun begins. A recommended practice is to start all of your filenames with your project acronym. Imagine doing this – whenever you need to find a file, a quick search on your computer for “MSC_SCI” will show all of your project files! It’s a great step towards organizing your project files. Let’s dig a little further though… What if you came up with a system for your own files where anything that contains data has the word data in the filename, OR anything dealing with the proposal has the word proposal in the filename? You see where I’m going, right? Yes, your filenames will get a little long, and this is where you need to manage the length and how much description you keep in the filenames. The recommended filename length is 25 characters, but in the end, it’s up to you how long or how short your filenames are. For us mature researchers, remember the days when all names had to be 8 characters?
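To make the payoff concrete, here is a tiny Python sketch of both halves of that convention – building filenames that lead with the acronym plus a content keyword, and then finding every project file with one search. The specific filenames are made up for illustration:

```python
from pathlib import Path

acronym = "MSC_SCI"

# Filenames lead with the acronym, then a content keyword ("data",
# "proposal"), then a short description – all illustrative names.
data_file = f"{acronym}_data_carcass_images.csv"
proposal_file = f"{acronym}_proposal_draft.docx"

# One recursive search then finds every project file, wherever it lives.
matches = sorted(Path(".").rglob(f"{acronym}*"))
for m in matches:
    print(m)
```

The same `MSC_SCI*` pattern works in your operating system's file search box; the script form is just handy for inventorying a project.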
We all love our dates and we tend to include dates in our filenames. Easiest way to determine which was the last file edited, right? How do you add dates though? There are so many ways, and many of them come with their own challenges. The recommendation when you use dates is to use the ISO standard: YYYYMMDD. For example, today, the day I am writing this post, is November 17, 2023 – in ISO format, 20231117. There is a really cool side effect of creating your dates using the ISO standard – review the next image, can you see what happened?
This is an example of a folder where I used the date as the name of each subfolder that contains data collected on that date. Notice how they are in order by date collected? Very convenient and easy to see. If I had spelled out the months for these dates, August would appear first, followed by July and June. If I had other months added, the order, at least to me, would be too confusing, as our computers order strings (words) alphabetically. Try the ISO date standard – it takes a bit of getting used to, but trust me, you’ll never go back.
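That side effect is easy to demonstrate: ISO (YYYYMMDD) names sort chronologically even when sorted as plain strings, while month-name folders sort alphabetically. A quick Python illustration, with made-up folder names:

```python
from datetime import date

# ISO-named folders: plain string sort IS chronological order.
iso_folders = ["20230815", "20230608", "20230712"]
print(sorted(iso_folders))    # June, July, August – in date order

# Month-name folders: string sort is alphabetical, not chronological.
month_folders = ["June 8 2023", "July 12 2023", "August 15 2023"]
print(sorted(month_folders))  # August first, then July, then June

# Producing an ISO date stamp for today's files:
print(date(2023, 11, 17).strftime("%Y%m%d"))  # 20231117
```

This is why an ISO-dated folder listing comes out in collection order with no extra effort from you or your software.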
Starting a new project with an organized folder structure and a naming convention is a fabulous start to managing your research data. As I say in class and workshops, we are not teaching anything new, we’re encouraging you to implement some of these skills into your research process, to make your life easier throughout your project.
One last note, if you are working in a lab or collaborative situation, consider creating an SOP (Standard Operating Procedure) guide outlining these processes and how you would like to set it up for your lab / group project.
Next stop will be documenting your work.
Data must help answer specific questions or meet specific goals and that influences the way the data can be represented. For example, analysis often depends on data in a specific format, generally referred to as wide vs long format. Wide datasets are more intuitive and easier to grasp when there are relatively few variables, while long datasets are more flexible and efficient for managing complex, structured data with many variables or repeated measures. Researchers and data analysts often transform data between these formats based on the requirements of their analysis.
Format: In a wide dataset, each variable or attribute has its own column, and each observation or data point is a single row. This representation is typically seen in Excel.
Structure: It typically has a broader structure with many columns, making it easier to read and understand when there are relatively few variables.
Use Cases: Wide datasets are often used for summary or aggregated data, and they are suitable for simple statistical operations like means and sums.
For example, here is a dataset in wide format:
Format: In a long dataset, there are fewer columns, and the data is organized with multiple rows for each unique combination of variables. Typically, you have columns for “variable,” “value,” and potentially other categorical identifiers.
Structure: It is more compact and vertically oriented, making it easier to work with when you have a large number of variables or need to perform complex data transformations.
Use Cases: Long datasets are well-suited for storing and analyzing data with multiple measurements or observations over time or across different categories. They facilitate advanced statistical analyses like regression and mixed-effects modeling. In Excel you can use pivot tables to view summary statistics of long datasets.
For example, here is some of the same data represented in a long format.
Long format is the better choice when choosing a format to document with a schema, as it is easier to document and clearer to understand.
For example, column headers (attributes) in the wide format are repetitive and this results in duplicated documentation (e.g. HT1 is the height of the subject measured at the end of week 1; HT2 is the height of the subject measured at the end of week 2 etc.). It is also less flexible as each additional week needs an additional column and therefore another attribute described in the schema. This means each time you add a variable you change the structure of the capture base of the schema reducing interoperability.
Documenting a schema in long format is more flexible because it is more general. This makes the schema reusable for other experiments, either by the researcher or by others. It is also easier to reuse the data and combine it with similar experiments.
At the time of analysis, the data can be transformed from long to wide if necessary and many data analysis programs have specialized functions that help researchers with this task.
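For example, in Python the pandas library provides melt and pivot for exactly this transformation (R users would reach for pivot_longer/pivot_wider in tidyr). The subjects and heights below are made up to echo the HT1/HT2 example above:

```python
import pandas as pd

# Hypothetical wide data: one row per subject, one column per
# weekly height measurement (HT1 = week 1, HT2 = week 2).
wide = pd.DataFrame({
    "subject": ["A", "B"],
    "HT1": [110.0, 112.0],
    "HT2": [111.5, 113.0],
})

# Wide -> long: one row per subject per measurement week.
long = wide.melt(id_vars="subject", var_name="week", value_name="height_cm")

# Long -> wide again, e.g. at analysis time.
back = long.pivot(index="subject", columns="week",
                  values="height_cm").reset_index()
```

Storing and documenting the long form, then pivoting to wide only when an analysis demands it, keeps the schema stable even as new measurement weeks are added.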
Written by: Carly Huitema
Anyone who knows me or has sat in one of my classes will know how much I LOVE the data life cycle. As researchers we have been taught, and have embraced, the research life cycle, and I’m sure many of you could recite how that works: Idea → Research proposal → Funding proposal → Data collection → Analysis → Publication → A new idea – and we start again. The data part of this always seemed the part that took the longest – other than maybe the writing – and really it just kind of stopped there. As a grad student, many years ago – too many to count anymore – I found the data important, and I worked with it, massaged it, cleaned it, re-massaged it, and analyzed it – until I was happy with the results and my supervisor was happy with the results as well. Then all the work and concentration shifted to the chapter writing and publication. The data? It just sat there – with my MSc project, the data entry pieces sat in a banker’s box until my supervisor cleared out the lab and shipped that box out to me in Alberta or Ontario. So the data lives, but in a box.
We talk about FAIR data – Findable, Accessible, Interoperable, and Reusable – um… my MSc data? It is Findable, to me – it’s here on the floor under my desk at home. Accessible? Maybe – it’s a box of printouts of the raw data that was entered in 1989. Interoperable? Let’s not even think about that! Reusable? Um… maybe as a footstool! So my MSc data, as I’m describing it to you right now, is NOT FAIR!
Why not? Because we never thought of the data life cycle back then! Collect data, analyze data, publish!
Today, we know better!!! I look back and get sad at the thought of all the data that was collected and that, well… is no longer out there – remember my last post about the OAC 150th anniversary?
Today, we strive to observe and follow the data life cycle – we should be telling data’s story – we should be managing our data so that it can be FAIR! Imagine, just for a moment, if I had managed my MSc research data – who knows what further research could have been completed. Now, funny story – there was a project here at University of Guelph that was doing what I did with my MSc, but with new technologies. The student who worked on the current project reached out to me to talk about my work – all I could do was tell them about my experiences. My data was inaccessible to them – and it turns out so was my thesis – the only copy I had was here in my office, and there was/is no accessible PDF version of it. Now – if my data had been managed and archived (I’ll talk more about this in a later post), the student may have been able to incorporate it into her thesis work – now how cool would that have been? Imaging pigs across 30 years! But… as we know, that did not happen.
So I’m going on and on about this – the reason is to convince you all NOT to leave your data by the wayside – you need to manage your research data – you need to create documentation so that YOU can tell your data’s story once you’ve published your work, and so your data can live on and have the opportunity to play a role in someone else’s project. I never imagined someone doing work similar to what I did 30 years ago – so you just never know!
I’m going to leave this data life cycle diagram above for you to consider. Next time I’ll start digging into the HOWs of Research Data Management (RDM) rather than the WHYs.
Have you heard the news? The Ontario Agricultural College will be 150 years old in 2024. Wow!! 150 years of being recognized for our research, our students, our faculty, and our community in the areas of food, agriculture, communities and the environment. Now, as a data archivist and researcher, I only have one question: Where is all the research data collected over all these years?
Yes, we can find some of the data – no worries – and some may argue that the data is in the journal articles; I may agree in some instances. BUT, overall, we need to come to the realization that the older data is more than likely gone or lost. Older media – 5.25″ diskettes, magnetic tapes – or older software – VPplanner, QuattroPro, my favourite WordPerfect – have led us to a time where we can no longer access the older data. Over the past few decades, data allowed us to answer our research questions, but once it completed its job, it was often left on a shelf, or in a box, or in the basement.
We MUST view and treat data as a valuable asset. Take it off the shelf, out of the box, bring it back to light and treat it as that valuable asset! Data should be viewed as gold in our research field. So, how do we do this? Quick answer is Research Data Management!
In my next blog post, I’ll talk about the Data Life Cycle and start digging into the details of what YOU can do to make your data available for our future students and researchers.