Requesting DIY recipe critique of drknexus


#1

Can I get a critique of drknexus’ basic v1.0; 99.994% Amazon Sourced DIY recipe and associated nutrient profile? I think I have a good basic recipe here, but I’d like to get more eyes on it and see if I’ve made any potentially concerning errors. The whole thing currently comes in at $8.07 per day with only 9 ingredients (2 of which are there just to push the macros to the right place). I’m also looking for suggestions on how to push the fat and carb macros around without needing to restructure things considerably; I’m especially interested in pushing the carb macro without needing maltodextrin - has anybody tried potato starch? The only other issue I know of with the recipe so far is that hemp hearts don’t mix particularly well.


#2

I have found it hard to make suggestions without tearing apart the recipe, adding, adjusting, etc. The best approach I’ve found is to search through the 861 ingredients for the modification I need and go by trial and error. Each add/remove/change affects so many other numbers that it really does come down to trial and error: mix it and see if it is something you could live on. I myself am now trying to change Doc’s bachelor chow to get it closer to a 40/30/30 (carb/protein/fat) ratio but haven’t been able to work it out (nor spend the time). For now I’ll just stick with what I have.

John
Day 4


#3

Oh, that hasn’t been my approach at all. I select ingredients, run them through several different runs of a SANN optimizer, and then take the output across to the website and do manual tweaking from there. One of the ‘innovations’ in my recipe is the use of three new ingredient classes that, as far as I can tell, aren’t in the shared ingredient list: the hemp hearts, the Muscle Milk, and the Raw Meal.
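
Roughly speaking, a single optimizer run looks something like the sketch below (illustrative only; recipeFit is a stand-in for whatever penalty function you use to score a vector of ingredient quantities):

  # One SANN run (sketch): optim() searches over ingredient quantities,
  # minimizing a single penalty score returned by a hypothetical recipeFit().
  nIngredients <- 9                                  # e.g. the 9 ingredients in the current recipe
  start <- runif(nIngredients, min = 0, max = 200)   # random starting quantities (grams)
  fit <- optim(par = start, fn = recipeFit, method = "SANN",
               control = list(maxit = 20000))
  round(fit$par, 1)                                  # candidate quantities to hand-tweak on the website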


#4

Now you’ve piqued my interest. I really had no idea anyone was using optimization routines to develop a recipe. If you are so inclined, I would love to read about your approach and methodology.

I just might have to put on my programmer hat for this one.


#5

Thanks for reminding me, cohron. I had intended to do that on github, but got so into sorting out my own recipe that I forgot about the meta project I have there. I am using R, and I know that isn’t everyone’s cup of tea.

It has been an interesting problem. As you might imagine, all of this is massively multivariate, and finding the right optimization approach has been hard. My SANN approach is sort of brute force, but there are just tons of local minima to deal with when optimizing. There is also the challenge of reducing the ‘goodness’ of a recipe down to a single number for the optimizer. What makes a recipe good? Simplicity? Cheapness? Being within acceptable ranges for all the nutrients? How do you manage the tradeoffs between the various needs of your recipe? Etc., etc.

I have updated my code to include the wide version of the USDA database. I can’t build the wide .json entry, but hopefully some of the others I’ve posted are worthwhile or provide you with a good data source to start from. If you have some other preferred format for the data, let me know. In coming weeks, if there is interest, I’ll start sharing my optimizers and other approaches.
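
For what it’s worth, the wide table I use on the R side can be built along these lines (a sketch only; the file name and the long-format columns are assumptions about the extract, not my actual code):

  # Build a wide table: one row per food, one column per nutrient, via data.table.
  # Assumes a long-format extract with columns Long_Desc, NutrDesc, Units, Nutr_Val.
  library(data.table)
  nut_long <- fread("usda_sr_long.csv")                       # hypothetical file name
  nut_long[, nutr_col := paste(NutrDesc, Units, sep = "_")]   # e.g. "Vitamin A, IU_IU", "Magnesium, Mg_mg"
  USDAwide <- dcast(nut_long, Long_Desc ~ nutr_col,
                    value.var = "Nutr_Val", fun.aggregate = mean)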


#6

I am trying to get a handle on R using the aRgh documentation. I must confess this will not likely be a direction I can go. And not being a github user is limiting my ability to find the ‘meat’ (the logic) of your work. I found the json nutritional files that I think came from the USDA (by the way, version 26 has been up since October, if that matters), but I haven’t found your R source code.

If time permits I may try to translate what you have written into something familiar to me, vb.net, and perhaps get it on the web somehow.

Did your optimizer use the entire potential 8,400 food items listed to find the best/optimal ingredients that meet your requirements (powder, few ingredients, etc.)?
Do you have anyone using your recipe currently (yourself included)? If so, for how long, and are there any results to report? I am likely changing to your recipe soon.


#7

The logic isn’t up yet. I understand not wanting to pick up a whole new programming language. That being said, if you were thinking about R… aRgh is not (to my mind) a good way to start.

Thanks for letting me know the new version of the USDA database was up. I had a setback in putting up anything useful because I decided that I wasn’t content with the Muscle Milk (bad press about lead levels).

My optimizer used a limited set of ingredients because the USDA database doesn’t have information about cost or form factor (as far as I know). It would be possible (in theory) to optimize down from the full db with an aim of minimizing ingredients etc. However, with the full 8.4k items it would be an untenable problem space.

I think the key to good optimization is defining a penalty/fit space that matches your aims. I stumbled around on this count a fair bit. One thing to be mindful of is that most optimization routines work better on smooth surfaces than on discontinuous ones. So, while it is true that many might not care much about the level of micronutrient X so long as it is between the RDA and the UTL, your optimizer will be happier if you select a sweet spot in the middle of those points, ding the fit a bit for drifting away from that ideal, and then ding it even harder as it drifts further. I suspect this isn’t just good for the optimizer; it is probably just smart. Given that not all nutrients are reported on the label, it probably isn’t wise to be treading right up against the UTL habitually. Back to function fitting… getting the shape of the functions right is critical, and I haven’t figured it out yet.
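
One possible shape, just to illustrate the idea (a toy sketch, not my actual fit function, and the numbers below are made up):

  # Toy penalty shape: aim for the midpoint of [RDA, UTL], penalize drift gently
  # inside the acceptable range and much more steeply (but continuously) outside it.
  nutrientPenalty <- function(amount, rda, utl) {
    sweet <- (rda + utl) / 2                        # 'ideal' point in the middle of the range
    d <- abs(amount - sweet) / ((utl - rda) / 2)    # 0 at the sweet spot, 1 at the RDA or UTL
    ifelse(d <= 1, d^2, 1 + 10 * (d - 1)^2)         # smooth inside, steep outside
  }
  nutrientPenalty(c(350, 400, 500), rda = 300, utl = 450)   # made-up numbers for illustration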

Then, you have priorities… e.g. don’t exceed UTLs, don’t fall below RDAs, hit 2000 calories, have good ratios of particular micros… etc. These characteristics all need to be given weights and reduced down into the same resulting value. Towards this end, I’ve found it helpful to judge the fit by (over UTL amount)/(UTL) and (under RDA amount)/(RDA); that way the scale / exact amounts of the nutrients don’t influence the fit. Presumably you could also reweight the nutrients according to how much you care about each one’s fit. For example, maybe you think the UTL for niacin is bogus and don’t want to respect it… etc. I’ve been playing around with the idea that, in the “acceptable” range, priorities should be ranked by the order of magnitude associated with their importance… although this can lead to undesirable thrashing, with the optimizer trying to get your micros into your ‘optimal’ range when you really are okay with where they are and wish the macros were in better shape. I’ve been tempted to change the fit rule as the fit improves… but changing the shape of the fit space out from under an optimizer mid-optimization seems like it won’t work well.
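
As a rough illustration of folding those normalized misses and weights into one number (hypothetical names; nutrMat, rda, utl, and weights are assumed to already exist, and my real fit function is more elaborate than this):

  # Collapse per-nutrient misses into a single value for the optimizer.
  # nutrMat: ingredients x nutrients matrix of per-unit nutrient amounts;
  # rda, utl, weights: vectors with one entry per nutrient.
  recipeFit <- function(qty) {
    totals <- as.vector(qty %*% nutrMat)      # nutrient totals for these quantities
    under  <- pmax(rda - totals, 0) / rda     # (under RDA amount)/(RDA)
    over   <- pmax(totals - utl, 0) / utl     # (over UTL amount)/(UTL)
    sum(weights * (under + over))             # one number to minimize
  }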

So here is a snippet of my actual pseudo code; it is very crude. Comments come after the #… you don’t need to know the language, only that <- is the assignment operator.

  # take a string of numbers and turn them into a matched set of food items by quantity
  # multiply the nutrition information by the quantity and sum it
  # check each nutrient against a list of DRIs and come up with a penalty per nutrient for having too little
  # check each nutrient against a list of UTLs and come up with a penalty per nutrient for having too much
  # I also run in an interactive mode when I want to see how the fit number was arrived at...
  # when I blow the doors off a target, I want to see the foods and how much they contributed to it.
  # I also have some nutrients where I look at the source of the nutrient and care less
  # if the source is not artificial, e.g. Vitamin A.
  if (interactive) {
    if (sur$TooMuch$`Vitamin A, IU_IU`) {
      cat("Too much Vitamin A printing details, be sure sources are natural over",
          UTL$`Vitamin A, IU_IU`, "IU \n")
    }
    if (sur$TooMuch$`Magnesium, Mg_mg`) {
      # (analogous message for magnesium)
    }
    # print a frame of per-food contributions for each nutrient with a problem level
    badLevels <- c(names(sur[["TooMuch"]])[sur[["TooMuch"]] == TRUE], def[[1]])
    for (badLevel in badLevels) {
      print(data.table(
        USDAwide[Long_Desc %in% names(fitting), Long_Desc],
        USDAwide[Long_Desc %in% names(fitting), badLevel, with = FALSE] *
          USDAwide[, unlist(fitting[as.character(USDAwide[Long_Desc %in% names(fitting), Long_Desc])])]
      ))
    }
    print("Problems with:")
    print(badLevels)
  }

… anyway - I’m sure you get the basic idea. The specifics of the optimizer I use are all a black box for me. I found that (of those I tried) simulated annealing worked well, but there was still no substitute for having lots of starting points (a good parallel programming problem).
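
If you want to see what the “lots of starting points” part looks like, a bare-bones version (sketch only, reusing the hypothetical recipeFit and nIngredients from above) is just a parallel map over random starts:

  # Multi-start SANN: launch several runs from random starting quantities in
  # parallel and keep the one with the lowest penalty value.
  library(parallel)
  starts <- replicate(32, runif(nIngredients, min = 0, max = 200), simplify = FALSE)
  runs <- mclapply(starts, function(s)
            optim(par = s, fn = recipeFit, method = "SANN",
                  control = list(maxit = 50000)),
            mc.cores = 4)                     # mclapply forks; on Windows use parLapply() instead
  best <- runs[[which.min(sapply(runs, `[[`, "value"))]]
  round(best$par, 1)                          # best quantities found across all starts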