After discussions with producers in southern Kansas, I felt the need to bring back this past blog. It seems that much (not all) of the early planted wheat lost a significant amount of biomass during the winter, and the N-Rich Strip GreenSeeker approach is producing what look to be low yield potentials and N-rate recommendations. This should be treated much like we treat grazed wheat, and the planting date should be adjusted; see below. It is also important to note that in the past year a new wheat calculator was added to the NUE site, http://nue.okstate.edu/SBNRC/mesonet.php. Number 1 is the original OSU SBNRC, but number 2 is a calculator produced by a KSU/OSU cooperative project. This is the SBNRC I recommend for use in Kansas and much of the northern tier of counties in OK.
Original Blog on Freeze Damage and the GreenSeeker.
Dr. Jeff Edwards “OSUWheat” wrote about winter wheat freeze injury in a recent blog on World of Wheat, http://osuwheat.com/2013/12/19/freeze-injury/. As Dr. Edwards notes, injury at this stage rarely impacts yield; therefore, the fertility requirements of the crop have not significantly changed. What will be impacted is how the N-Rich Strip and GreenSeeker™ sensor will be used. This is not a suggestion to abandon the technology; in fact, time has shown it can be just as accurate after tissue damage. It should be noted, however, that GreenSeeker™ NDVI readings should not be collected on a field that has recently been damaged.
A producer using the N-Rich Strip, GreenSeeker™, Sensor Based N-Rate Calculator approach on a field with freeze damage will need to consider a few points. First, there needs to be a recovery period after significant tissue damage; this may be one to two weeks of good growth. Second, sense areas that have had the same degree of damage, as elevation and landscape position often impact the level of damage. It would be misleading to sense an area in the N-Rich Strip that was not significantly damaged but an area in the farmer practice that took a great deal of tissue loss.
Finally, we must consider how the SBNRC, available online at http://nue.okstate.edu/SBNRC/mesonet.php, works. The calculator uses NDVI to estimate wheat biomass, which is directly related to grain yield. This predicted grain yield is then used to calculate the nitrogen (N) rate. So if biomass is reduced, yield potential is reduced and the N rate is reduced. The same issue is seen in dual-purpose wheat production, so the approach I recommend for the dual-purpose guys is the same one I will recommend for those who experienced significant freeze damage. It should not be used for wheat with just minimal tip burn.
To account for the loss of biomass, but not yield, the planting date needs to be adjusted to “trick” the calculator into thinking the crop is younger and has greater potential. The planting date should be moved forward 7 to 14 days, depending on the severity of the damage. For example, the first screen shot shows what the SBNRC would recommend using the real planting date. In this case the potential yield is significantly underestimated.
The second and third screen shots show the impact of moving the planting date forward by 7 and 14 days, respectively. Note the increase in yield potential, which is the agronomically correct potential for the field considering soil and plant condition, and the increase in the recommended N rate. Adjust the planting date, within the 7- to 14-day window, so that the yield potential (YPN) is at a level suitable to the field condition and environment. The number of days adjusted is related to the size of the wheat and the amount of loss: the larger the wheat and/or the greater the biomass loss, the further forward the planting date should be moved. In the example below, YPN goes from 37 bu/ac on the true planting date to 45 bu/ac with a 14-day correction. The N rate changes from 31 lbs to 38 lbs; this change may not be as large as you might expect. That is because YP0, the yield without additional N, also increases from 26 to 32 bushels.
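To see why the planting-date “trick” works, it helps to know that calculators in this family divide NDVI by the days of growth since planting to form an in-season index, then map that index to yield potential. The sketch below uses made-up coefficients, not the actual SBNRC calibration, purely to show the direction of the effect: fewer days from planting means a higher index and a higher predicted yield for the same NDVI.

```python
# Illustrative INSEY-style yield model. The coefficients a and b are
# placeholders for demonstration only, NOT the published SBNRC values.
import math

def yield_potential(ndvi, days_from_planting, a=0.5, b=300.0):
    """Predict a relative yield potential from NDVI and growth duration.

    INSEY = NDVI / days of growth; an exponential response of yield to
    INSEY is typical of the OSU-style winter wheat models.
    """
    insey = ndvi / days_from_planting
    return a * math.exp(b * insey)

ndvi = 0.55        # NDVI sensed after the crop has recovered from freeze
true_days = 120    # days from the real planting date
adjusted_days = 106  # planting date moved forward 14 days

yp_true = yield_potential(ndvi, true_days)
yp_adjusted = yield_potential(ndvi, adjusted_days)
assert yp_adjusted > yp_true  # the adjusted date raises predicted yield
```

Moving the planting date forward shrinks the denominator, so the same freeze-reduced NDVI reads as a younger, faster-growing crop with more potential, which is exactly the correction the blog describes.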
This adjustment is only to be made when tissue has been lost or removed, not when you disagree with the yield potential. If you have any questions about N-Rich Strips, the GreenSeeker™, or the online SBNRC please feel free to contact me at firstname.lastname@example.org or 405.744.1722.
I recently wrote an article for Crops & Soils magazine on the components of a variable-rate nitrogen recommendation. The people at the American Society of Agronomy headquarters were kind enough to make it open access. What follows in this blog is just a highlight reel. For the full article visit https://dl.sciencesocieties.org/publications/cns/articles/49/6/24
Components of a variable rate nitrogen recommendation
Variable-rate nitrogen management (VRN) is a fairly hot topic right now. The outcome of VRN promises improved efficiencies, economics, yields, and environmental sustainability. As the scientific community learns more about the crop’s response to fertilizer nitrogen and the soil’s ability to provide nitrogen, the complexity of providing VRN recommendations, which both maximize profitability and minimize environmental risk, becomes more evident.
The components of nitrogen fertilizer recommendations are the same whether it is for a field flat rate or a variable-rate map. The basis for all N recommendations can be traced back to the Stanford equation (Stanford, 1973). At first glance, the Stanford equation is very basic and fairly elegant with only three variables in the equation.
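For reference, the Stanford equation is commonly written as follows, using the same three terms discussed in this article (crop N need, soil N supply, and fertilizer use efficiency):

```latex
N_{fert} = \frac{N_{crop} - N_{soil}}{e_{fert}}
```

The fertilizer rate is simply the crop's N requirement, less what the soil supplies, scaled up by the efficiency at which applied fertilizer is actually recovered by the crop.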
Historically, this was accomplished on a field level through yield goal estimates and soil test nitrate values. Generalized conversions, such as 1.2 lb N/bu of corn and 2.0 lb N/bu of winter wheat, accounted for Ncrop and efert to simplify the process.
The basis for Ncrop is grain yield × grain N concentration. As grain N is fairly consistent, the goal of VRN methods is to identify grain yield. This is achieved through yield monitor data, remote sensing and crop models.
The N provided by, or in some cases removed by, the soil is dynamic and often weather dependent. Kindred et al. (2014) documented that the amount of N supplied by the soil varied spatially by 107, 67, and 54 lb/ac across three studies. Much of the soil N supply is controlled by organic matter (OM): for every 1% OM in the top 6 inches of the soil profile, there is approximately 1,000 lb N/ac.
Historically, the efficiency at which N fertilizer is utilized was integrated into N recommendations and not provided as an input option, e.g., the general conversion factor for corn of 1.2 lb N/bu. Nitrogen concentration in corn grain ranges from 1.23 to 1.46% with an average of 1.31% (Heckman et al., 2003), or 0.73 lb N/bu. Therefore, the 1.2-lb value assumes a 60% fertilizer use efficiency. More recently, recommendations have incorporated application method or timing factors in an attempt to account for efficiencies.
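The implied 60% efficiency can be checked directly from the numbers above, assuming the standard 56 lb test weight for a bushel of corn:

```python
# Back out the fertilizer use efficiency implied by the 1.2 lb N/bu
# corn conversion factor, using the grain N figures cited above.
bushel_weight = 56.0        # lb of corn grain per bushel (standard)
grain_n_fraction = 0.0131   # 1.31% average grain N (Heckman et al., 2003)

n_removed_per_bu = bushel_weight * grain_n_fraction  # lb N actually in grain
recommended_per_bu = 1.2                             # lb N applied per bu

efficiency = n_removed_per_bu / recommended_per_bu
print(round(n_removed_per_bu, 2))  # 0.73 lb N/bu
print(round(efficiency, 2))        # 0.61, i.e., roughly 60% efficiency
```

In other words, for every 1.2 lb of fertilizer N recommended, only about 0.73 lb leaves in the grain; the remainder covers losses and inefficiencies baked into the blanket factor.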
While a VRN strategy that works across all regions, landscapes, and cropping systems has yet to be developed, the process of nitrogen management has greatly improved and is evolving almost daily. Those methods that are capable of determining the three inputs of the Stanford equation while incorporating regional specificity will capture the greatest level of accuracy and precision. Ferguson et al. (2002) suggested that improved recommendation algorithms may often need to be combined with methods (such as remote sensing) to detect crop N status at early, critical growth stages, followed by carefully timed, spatially adjusted supplemental fertilization, to achieve optimum N use efficiency. As information and data are gathered and incorporated, and data-processing systems improve in both capacity and speed, the likelihood of significantly increasing nitrogen use efficiency for the benefit of society and industry improves. The goal of all practitioners is to improve upon the efficiencies and economics of the system, and this should be kept in mind as new techniques and methods are evaluated. This improvement can be as small as a few percentage points.
This article is published in the Crops and Soils Magazine doi:10.2134/cs2016-49-0609. The full article includes more details on the components plus concepts of integration.
Published in Progressive Forage http://www.progressiveforage.com/ 9.1.2016
First, let’s agree the term “precision” is relative. Forage is a diverse system with an even more diverse set of management strategies. Regardless, every manager should be constantly striving to improve the precision with which nutrients are managed. The ultimate goal of any precision nutrient management tool should be this: producing the highest quality output (in this case forage) with the least amount of input – ultimately, optimizing efficiencies and maximizing profits. Within this readership there are those who are soil sampling at a 1-acre resolution and others who likely have not pulled a soil sample in the past decade. For both ends of the spectrum we can make improvements – let’s start basic and move forward.
A soil sample should be the basis for all nutrient management decisions. Is soil testing a perfected science? No, far from it. However, there must be a starting point. A soil sample is that first bit of information we can start with and the basic data collection precision ag needs to make improved management decisions. When fertilizer is applied without a recent soil sample, it is done based on pure guesswork. How many other management decisions are made on a farm or ranch by a guess?
The composite soil sample is a great start, but it is just that – a start. While there are some soils that are very uniform, most are extremely variable. In a survey of 178 fields in the southern Great Plains, the average soil pH was 6.12; phosphorus (Mehlich 3 phosphorus [M3P] and Bray 1 phosphorus [B1P]) averaged 28 ppm, while soil test potassium (STK) averaged 196 ppm. So, on average, the primary components of soil fertility were okay. However, on average the 178 fields had a range in soil pH of 1.8 units, M3P and B1P both had a range of 52 ppm, and STK had a range of 180 ppm.
Table 1 shows the minimum and maximum soil test values for the 178 fields.
This data helps support the concept that we should find ways to increase the resolution, or decrease the number of acres represented by a single soil sample. Increasing soil sample resolution is typically done using one of two methods – by zone or by grid.
The basis of a zone sample is creating a smaller field. The biggest question with zones is how to draw the lines. There are dozens, if not hundreds, of possible methods, each having its own reasons and benefits. My basic recommendation is that before lines are drawn, goals have to be established. For example, if phosphorus or soil pH management is important, the basis for the lines should be soil based. This could be a soils map, soil texture, slope, and so on. If the target is improved nitrogen management, then the basis for drawing lines should be yield based. This could be yield maps, aerial images, historic knowledge, or many soil parameters.
Why does it matter? Two reasons. First, across the broad spectrum of soils and environments, two nutrients are rarely spatially correlated, which means the zone that is best at describing phosphorus variability can do an extremely poor job describing potassium variability. Second, and more theoretically, the demands for different nutrients are driven by different factors. Phosphorus (a soil-immobile nutrient) fertilizer need is driven by the soil P concentration (look up Bray’s sufficiency concept). Many use yield as a parameter for phosphorus application, but this is not a plant need or even a yield-maximizing practice; fertilizing based on removal is done to prevent nutrient mining. However, nitrogen (a nutrient mobile in the soil) fertilizer need is based on yield and crop removal. Hence, the common land-grant university N and sulfur recommendations are yield goal based.
To be honest, even the experts disagree on the hows, whys, and ifs of grid sampling. I like data; therefore, I naturally lean toward grid sampling if the field warrants it. For me, the biggest benefit of grid over zone sampling is that soils data from zone samples are biased toward whatever parameter was set for the zone, and therefore any resulting map for all nutrients must reflect the original zones. In a grid, each data point is independent, so the maps of each nutrient can be independent, and (the science tells us) in most cases nutrients are independent of each other.
Ideally, two pieces of information are available for determining whether a field should be grid sampled. The first is a yield map from any previous crop. If yield is fairly uniform, I question the need for variable-rate management, much less the expense of grid sampling. Regardless of the sampling method, zone or grid, the discussion is moot if spatial variability does not exist across the field. However, many forage producers may not have access to this kind of data.
One of the most useful decision aid tools for grid sampling is the composite soil sample. The reason is simple statistics: a composite sample should be a representative average of the field. If the data is normally distributed, that means half of the field is above and half the field is below the sample average. So the optimum fields to grid are those in which a composite value falls at the point where the benefit of applying an input is in question, because it suggests that approximately half the field needs the input while the other half likely does not. It is in this scenario that the return on investment can be greatest. With pH, for example, fields with a very low composite value should receive a flat broadcast application and be sampled again at a later date. Fields with a composite pH well above 6.0 are unlikely to have enough acres needing lime to warrant sending out an applicator.
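The pH example above can be sketched as a simple three-way screen. The 6.0 target matches the discussion, but the 0.5-unit “near the threshold” window below is purely illustrative, not a recommendation:

```python
# Rough screen for whether a composite soil pH justifies grid sampling.
# A composite near the decision threshold implies roughly half the field
# needs lime and half does not - the best return-on-investment case.
# The 0.5 window is an illustrative placeholder.

def grid_sampling_candidate(composite_ph, target_ph=6.0, window=0.5):
    """Return a coarse action suggestion from a composite pH."""
    if composite_ph < target_ph - window:
        return "flat-rate lime, resample later"  # most of the field is acidic
    if composite_ph > target_ph + window:
        return "lime unlikely to pay"            # too few acres need lime
    return "grid sample"                         # ~half the field may respond

print(grid_sampling_candidate(5.9))  # near the target -> grid sample
```

The same logic applies to any input whose composite value sits right at the point where applying it is a coin flip.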
Is grid sampling a lifelong activity? No. The initial round of grid sampling will provide both an indicator of the variability level and the overall needs of the field. From that point, decisions can be made and actions taken. Identify the greatest limiting factor in the field based on the samples, and focus on impacting change upon it. Zone sampling in subsequent years can be utilized to document change. When that issue is resolved, move to the next factor. It may require grid sampling again or using the original grid to develop new management zones. For instance, if the greatest issue first identified in the field is soil acidity, then after the soil pH is neutralized the field should be grid sampled again. The reason for this is that changing soil pH will influence many nutrients, and the amount of change is not consistent but dependent upon many other factors.
In precision ag we tend to look at layers: yield, soil, etc. However, none of these tell the whole story independently. An area in a field may have moderate soil fertility and be underproducing. Using the data collected, the decision may be made to increase inputs; yet the real issue may be a shallow restrictive layer limiting production. In that case the extra inputs will be of no benefit and could even further reduce production. It is at this point I like to bring out the importance of “getting dirty.” There is no technology that can take the place of “boots on the ground” agronomy.
For producers who have historically performed intensive soil sampling, there is still room for improvement. Soil testing and nutrient management are not an exact science; in fact, they were originally built for broad, sweeping, statewide recommendations. As technology advances and inputs can be applied at sub-acre resolutions, all of the environment (weather, soil) by genotype interactions become more evident.
The next step in precision ag is to develop recommendations based upon site-specific crop responses. This is where nutrient response strips can further improve nutrient use efficiencies and crop production. In Oklahoma, nitrogen-rich strips are applied across fields (grain and forage) to determine in-season nitrogen needs. Having a strip in the field with 50 to 100 extra units of N acts as a management tool that takes into account soil, environment, and plant need. If the strip is visible, the field or zone needs more N; if it is not visible, the crop is not deficient and at that point in the season does not need more N. Producers have taken this approach for N and adapted it for P and K, with strips across the field at a zero and a high rate of either nutrient. After a few seasons, responsive and non-responsive zones are developed and P and K applications are managed accordingly.
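When the strip comparison is done with a sensor rather than by eye, it is often summarized as a response index: NDVI in the N-rich strip divided by NDVI in the farmer practice. The sketch below uses a 1.1 cutoff, which is a commonly cited rule of thumb rather than a value from this article:

```python
# Sensor version of the "is the strip visible?" check. If the N-rich
# strip shows meaningfully more biomass (higher NDVI) than the farmer
# practice, the crop is N-responsive. The 1.10 threshold is illustrative.

def needs_nitrogen(ndvi_strip, ndvi_farmer, threshold=1.10):
    """Return True when the N-rich strip 'shows up' against the field."""
    response_index = ndvi_strip / ndvi_farmer
    return response_index >= threshold

print(needs_nitrogen(0.78, 0.62))  # strip clearly visible -> True
print(needs_nitrogen(0.70, 0.68))  # no real difference    -> False
```

The same comparison works for the P and K strips mentioned above, with the zero-rate area standing in for the farmer practice.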
One misconception of precision ag is that the end result should be a field with uniform yield from one corner to the other. This is often not the case; in fact, in many cases the variability in production across the field can increase. Theoretically, precision ag is applying inputs at the right rate in the right place. This means areas of the field that are yield limited due to underlying factors that cannot be managed get a reduction in inputs with no effect on yield. Other areas of the field have not been managed for maximum production, so an increase in inputs results in increased yield, widening the gap between the low and high yield levels.
Regardless of where a producer currently sits on the technology curve, there are ways to increase productivity and efficiency. There is nothing wrong with taking baby steps; it is often the simple things that lead to the greatest return.
With the most recent FAA UAV announcement, my phone has been ringing with excited potential UAV users. Two points always come up in the conversation: NDVI (normalized difference vegetation index) and image resolution. This blog will address the use of NDVI; resolution will come later. Before getting into the discussion, what NDVI is should be addressed. As described by Wikipedia, NDVI is a simple graphical indicator that can be used to analyze remote sensing measurements, typically but not necessarily from a space platform, and assess whether the target being observed contains live green vegetation or not. NDVI is a mathematical function of the reflectance values of two wavelength regions, near-infrared (NIR) and visible (commonly red).
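The math behind the index is short. This is the standard NDVI definition, computed from NIR and red reflectance:

```python
# Standard NDVI calculation. Healthy vegetation reflects strongly in
# the near-infrared and absorbs red light, pushing NDVI toward 1;
# bare soil and stressed canopies sit much lower.

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red).

    Mathematically bounded by -1 and 1; vegetated surfaces
    typically fall between 0 and 1.
    """
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.08), 2))  # dense canopy  -> 0.72
print(round(ndvi(0.30, 0.20), 2))  # sparse cover  -> 0.2
```

Because the difference is normalized by the sum, NDVI is unitless and partially compensates for overall brightness, which is why it became the workhorse vegetation index.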
The index NDVI has been tied to a great number of crop factors, the most important being biomass. Biomass is important because most things in the plant world impact biomass, and biomass is related to yield. The most challenging issue with NDVI is that it is highly correlated with biomass, and a plant’s biomass is impacted by EVERYTHING! Think about it: how many things can impact how a plant grows in a field?
The kicker that most do not know is that all NDVI values are not created equal. The source of the reflectance makes a big difference.
Measuring reflectance requires a light source, and this is where the two forms of NDVI separate. Passive sensors measure reflectance using the sun (natural light) as the light source, while active sensors measure the reflectance of a known light source (artificial light). The GreenSeeker is a good example of an active sensor: it emits its own light using LEDs in the sensor. Satellite imagery is the classic passive sensor.
The challenge with passive remote sensing lies within the source of the light. Solar radiation and the amount of reflectance are impacted by atmospheric conditions and sun angle, to name a few things. That means without constant calibration, typically achieved through white-plate measurements, the values are not consistent over time and space. This is the case whether the sensor is on a satellite or hand held. In my research plots where I collect passive sensor data, so that I can measure all wavelengths, I have found it necessary to collect a white-plate calibration reading every 10 to 15 minutes of sensing. This is the only way I can remove the impacts of sun angle and cloud cover. With an active sensor, as long as the crop does not change, the value is calibrated and repeatable.
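At its simplest, the white-plate correction amounts to dividing each raw reading by a reference reading taken off a near-perfect reflector under the same illumination. This is a simplified sketch (real field calibration involves more, such as dark-current correction), but it shows why calibrated readings survive a passing cloud:

```python
# Simplified white-plate calibration for a passive sensor: raw counts
# are normalized by the white-plate counts recorded under the same
# illumination, yielding relative reflectance that is comparable
# across changes in sun angle and cloud cover.

def calibrate(raw_counts, white_plate_counts):
    """Convert raw sensor counts to reflectance relative to the plate."""
    return [r / w for r, w in zip(raw_counts, white_plate_counts)]

# The same target sensed in full sun and under a cloud (half the light):
sunny  = calibrate([420.0, 180.0], [840.0, 900.0])  # [NIR, red] counts
cloudy = calibrate([210.0,  90.0], [420.0, 450.0])
print(sunny == cloudy)  # True: identical reflectance after calibration
```

Without the plate reading, the raw counts from the cloudy pass would be half those of the sunny pass, and any NDVI map built from them would shift from one day to the next.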
What does this mean for those wanting to use NDVI collected from a passive sensor (satellite, plane, or UAV)? Not much, if the user wants to distinguish or identify high-biomass and low-biomass areas. Passive NDVI is a great relative measurement of good and bad. However, many who look at the measurements over time notice the values can change significantly from one day to the next. The best analogy I have for passive NDVI is a yield map with no legend: even the magnitude of change between high and low is difficult to determine.
Passive NDVI in the hands of an agronomist or crop scout can be a great tool to identify zones of productivity. It becomes more complicated when decisions are made solely upon these values. One issue is that this is a measure of plant biomass; it does nothing to tell us why biomass production is different from one area to the next. That is why, even with an active sensor, OSU utilizes N-Rich Strips (N-Rich Strip Blog). The N-Rich Strip tells us whether the difference is due to nitrogen or some other variable. We are also looking into utilizing P, K, and lime strips throughout fields. Again, a good agronomist can utilize the passive NDVI data by directing sampling of the high- and low-biomass areas to identify the underlying issues creating the differences.
OkState has been approached by many UAV companies wanting to incorporate our nitrogen rate recommendation into their systems. This is an even greater challenge. Our sensor-based nitrogen rate calculator (SBNRC blog) utilizes NDVI to predict yield based upon a model built over the last 20 years. That means, to work correctly, the NDVI must be calibrated and accurate to at least the 0.05 level (NDVI runs from 0.0 to 1.0). To date, none have been able to provide a mechanism by which the NDVI could be calibrated well enough.
NDVI values collected with a passive sensor, regardless of the platform the sensor is on, have agronomic value. However, that value is limited if the user is trying to make recommendations. As with any technology, to use NDVI you should have a goal in mind. It may be to identify zones or to make recommendations. Know the limitations of the technology – they all have limitations – and use the information accordingly.