eScholarship
Open Access Publications from the University of California

This series is automatically populated with publications deposited by UCLA Henry Samueli School of Engineering and Applied Science Department of Civil and Environmental Engineering researchers in accordance with the University of California’s open access policies. For more information see Open Access Policy Deposits and the UC Publication Management System.


Factors and Processes Affecting Delta Levee System Vulnerability

(2016)

We appraised factors and processes related to human activities and high water, subsidence, and seismicity. Farming and drainage of peat soils caused subsidence, which contributed to internal levee failures. Subsidence rates decreased with time but still contributed to levee instability. Modeling of changes in seepage and static slope instability suggests an increased probability of failure with decreasing peat thickness. Additional data are needed to assess the spatial and temporal effects of subsidence from peat thinning and deformation. Large-scale state investment in levee upgrades (> $700 million since the mid-1970s) has increased conformance with applicable standards; however, accounts conflict about corresponding reductions in the number of failures.

Modeling and history suggest that projected increases in high-flow frequency associated with climate change will increase the rate of levee failures. Quantifying this increased threat requires further research. A reappraisal of seismic threats resulted in updated ground motion estimates for multiple faults and earthquake-occurrence frequencies. Estimated ground motions are large enough to induce failure. The immediate seismic threat, liquefaction, is the sudden loss of strength from an increase in the pressure of the pore fluid and the corresponding loss of inter-particle contact forces. However, levees damaged during an earthquake that do not immediately fail may eventually breach. Key sources of uncertainty include occurrence frequencies and magnitudes, localized ground motions, and data for liquefaction potential.

Estimates of the consequences of future levee failure range up to multiple billions of dollars. Analysis of future risks will benefit from improved description of levee upgrades and strength, as well as from consideration of subsidence, the effects of climate change, and earthquake threats. Ecosystem benefits from levee habitat in this highly altered system are few. Better coordination is needed among the creation of high-value habitat, levee needs, and the costs and benefits of levee improvements and breaches.

Proteomics insights into the fungal-mediated bioremediation of environmental contaminants

(2024)

As anthropogenic activities continue to introduce various contaminants into the environment, the need for effective monitoring and bioremediation strategies is critical. Fungi, with their diverse enzymatic arsenal, offer promising solutions for the biotransformation of many pollutants. While conventional research reports on ligninolytic, oxidoreductive, and cytochrome P450 (CYP) enzymes, the vast potential of fungi, with approximately 10,345 protein sequences per species, remains largely untapped. This review describes recent advancements in fungal proteomics instrumentation and software and highlights fungal detoxification mechanisms and biochemical pathways. It also surveys lesser-known fungal enzymes with potential applications in environmental biotechnology. By reviewing the benefits and challenges associated with proteomics tools, we hope to summarize and promote studies of fungi and fungal proteins relevant to the environment.


ZeroCAL: Eliminating Carbon Dioxide Emissions from Limestone's Decomposition to Decarbonize Cement Production.

(2024)

Limestone (calcite, CaCO3) is an abundant and cost-effective source of calcium oxide (CaO) for cement and lime production. However, the thermochemical decomposition of limestone (∼800 °C, 1 bar) to produce lime (CaO) results in substantial carbon dioxide (CO2(g)) emissions and energy use, i.e., ∼1 tonne [t] of CO2 and ∼1.4 MWh per t of CaO produced. Here, we describe a new pathway that uses CaCO3 as a Ca source to make hydrated lime (portlandite, Ca(OH)2) at ambient conditions (p, T), while nearly eliminating process CO2(g) emissions (as low as 1.5 mol % of the CO2 in the precursor CaCO3, equivalent to 9 kg of CO2(g) per t of Ca(OH)2), within an aqueous flow-electrolysis/pH-swing process that coproduces hydrogen (H2(g)) and oxygen (O2(g)). Because Ca(OH)2 is a zero-carbon precursor for cement and lime production, this approach represents a significant advancement in the production of zero-carbon cement. The Zero CArbon Lime (ZeroCAL) process comprises dissolution, separation/recovery, and electrolysis stages, according to the following steps: (Step 1) chelator (e.g., ethylenediaminetetraacetic acid, EDTA)-promoted dissolution of CaCO3 and complexation of Ca2+ under basic (>pH 9) conditions; (Step 2a) Ca enrichment and separation using nanofiltration (NF), which separates the Ca-EDTA complex from the accompanying bicarbonate (HCO3−) species; (Step 2b) acidity-promoted decomplexation of Ca from EDTA, which allows near-complete chelator recovery and forms a Ca-enriched stream; and (Step 3) rapid precipitation of Ca(OH)2 from the Ca-enriched stream using electrolytically produced alkalinity. These reactions can be conducted in a seawater matrix, yielding coproducts including hydrochloric acid (HCl) and sodium bicarbonate (NaHCO3) from electrolysis and limestone dissolution, respectively.
Careful analysis of the reaction stoichiometries and energy balances indicates that approximately 1.35 t of CaCO3, 1.09 t of water, 0.79 t of sodium chloride (NaCl), and ∼2 MWh of electrical energy are required to produce 1 t of Ca(OH)2, with significant opportunity for process intensification. This approach has major implications for decarbonizing cement production within a paradigm that emphasizes the use of existing cement plants and electrification of industrial operations, while also creating approaches for alkalinity production that enable cost-effective and scalable CO2 mineralization via Ca(OH)2 carbonation.
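The quoted stoichiometry can be checked with a back-of-envelope mass balance. The sketch below (Python, illustrative only) uses standard molar masses and assumes a 1:1 Ca basis between CaCO3 and Ca(OH)2; it reproduces the ∼9 kg CO2 and ∼1.35 t CaCO3 figures quoted above, but does not model the process chemistry itself.

```python
# Back-of-envelope check of the ZeroCAL figures quoted in the abstract.
# Only molar masses and the quoted 1.5 mol % CO2 release are used.

M_CACO3 = 100.09   # g/mol, calcite
M_CAOH2 = 74.09    # g/mol, portlandite
M_CO2 = 44.01      # g/mol

def co2_emitted_kg_per_t_portlandite(mol_fraction_released=0.015):
    """CO2 emitted per tonne of Ca(OH)2 if only `mol_fraction_released`
    of the precursor carbonate's CO2 escapes (abstract quotes 1.5 mol %)."""
    mol_caoh2_per_t = 1_000_000 / M_CAOH2   # mol Ca(OH)2 in 1 t
    # one mol of CO2 accompanies each mol of CaCO3 dissolved (1:1 Ca basis)
    return mol_caoh2_per_t * mol_fraction_released * M_CO2 / 1000

def caco3_required_t_per_t_portlandite():
    """Stoichiometric CaCO3 demand per tonne of Ca(OH)2 (1:1 Ca basis)."""
    return M_CACO3 / M_CAOH2

print(round(co2_emitted_kg_per_t_portlandite(), 1))   # 8.9 kg CO2 per t
print(round(caco3_required_t_per_t_portlandite(), 2)) # 1.35 t CaCO3 per t
```

Both numbers agree with the abstract's quoted values (∼9 kg of CO2(g) and ∼1.35 t of CaCO3 per t of Ca(OH)2).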


Implications of mHVSR Spatial Variability on Site Response Predictability

(2024)

One-dimensional ground response analyses (GRA) can introduce model error to site response estimates when wave propagation is not dominated by vertically propagating shear waves. We identify sites suitable for GRA based on microtremor horizontal-to-vertical spectral ratios (mHVSRs). We analyzed 300 microtremor recordings from 17 vertical array sites in California, comparing mHVSRs at varying spatial separations. We find that low mHVSR spatial correlation, as measured using Longest Common Subsequence, tends to occur at vertical array sites that are poorly modeled by GRA. Conversely, stronger mHVSR correlations tend to occur at sites where GRA is relatively effective.
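As a rough illustration of the curve-matching idea, the sketch below implements a simple Longest Common Subsequence (LCSS) similarity for two sampled spectral-ratio curves. The tolerance `eps` and the toy curves are illustrative assumptions, not values from the study, which applies its own LCS formulation to measured mHVSRs.

```python
# Minimal LCSS similarity for two real-valued curves: the fraction of samples
# that can be matched in order, within a tolerance eps. Illustrative only.

def lcss_similarity(a, b, eps=0.1):
    """Return LCS length (with +/- eps matching) normalized by min length."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / min(n, m)

# Two hypothetical mHVSR curves sampled at the same frequencies:
curve1 = [1.0, 1.2, 2.5, 1.1, 0.9]
curve2 = [1.05, 1.15, 2.45, 1.2, 0.95]
print(lcss_similarity(curve1, curve2))  # 1.0: every sample matches within eps
```

A similarity near 1 indicates spatially consistent mHVSR shapes; low values flag the kind of spatial variability associated above with poor GRA performance.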


Monolithic Polyepoxide Membranes for Nanofiltration Applications and Sustainable Membrane Manufacture.

(2024)

The present work details the development of carbon fiber-reinforced epoxy membranes with excellent rejection of small-molecule dyes. It is a proof-of-concept for a more sustainable membrane design incorporating carbon fibers, and their recycling and reuse. 4,4-methylenebis(cyclohexylamine) (MBCHA) polymerized with either bisphenol-A-diglycidyl ether (BADGE) or tetraphenolethane tetraglycidylether (EPON Resin 1031) in polyethylene glycol (PEG) were used to make monolithic membranes reinforced by nonwoven carbon fibers. Membrane pore sizes were tuned by adjusting the molecular weight of the PEG used in the initial polymerization. Membranes made of BADGE-MBCHA showed rejection of Rose Bengal approaching 100%, while tuning the pore sizes substantially increased the rejection of Methylene Blue from ~65% to nearly 100%. The membrane with the best permselectivity was made of EPON-MBCHA polymerized in PEG 300. It has an average DI flux of 4.48 LMH/bar and an average rejection of 99.6% and 99.8% for Rose Bengal and Methylene Blue dyes, respectively. Degradation in 1.1 M sodium hypochlorite enabled the retrieval of the carbon fiber from the epoxy matrix, suggesting that the monolithic membranes could be recycled to retrieve high-value products rather than downcycled for incineration or used as a lower selectivity membrane. The mechanism for epoxy degradation is hypothesized to be part chemical and part physical due to intense swelling stress leading to erosion that leaves behind undamaged carbon fibers. The retrieved fibers were successfully used to make another membrane exhibiting similar performance to those made with pristine fibers.
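For readers unfamiliar with the reported quantities, the sketch below (Python, with illustrative values only) shows the standard observed-rejection definition, R = 1 − Cp/Cf, and the pressure-normalized flux ("LMH/bar", i.e., L m⁻² h⁻¹ bar⁻¹) in which the results above are reported.

```python
# Standard membrane performance metrics; the input values are illustrative,
# not measurements from the study.

def observed_rejection(c_permeate, c_feed):
    """Fraction of solute retained by the membrane: R = 1 - Cp/Cf."""
    return 1.0 - c_permeate / c_feed

def permeance_lmh_per_bar(volume_l, area_m2, time_h, pressure_bar):
    """Pressure-normalized flux (the 'LMH/bar' unit used in the abstract)."""
    return volume_l / (area_m2 * time_h * pressure_bar)

# e.g., 0.02 mg/L dye in the permeate from a 10 mg/L feed:
print(round(observed_rejection(0.02, 10.0), 3))        # 0.998 -> 99.8 %
# e.g., 2.24 L collected over 0.01 m^2, 10 h, at 5 bar:
print(permeance_lmh_per_bar(2.24, 0.01, 10.0, 5.0))    # 4.48 LMH/bar
```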


A machine learning-based analysis of liquefaction input factors using the Next Generation Liquefaction database

(2024)

Liquefaction triggering is typically predicted using fully empirical and/or semi-empirical models. Hence, such models are heavily reliant upon available liquefaction (and non-liquefaction) case history data. These predictive models are based on a variety of factors describing the demand (i.e., the cyclic stress ratio, CSR, in existing legacy models) and the capacity (i.e., the cyclic resistance ratio, CRR). However, the degree to which these factors truly affect model performance is unknown. To explore this aspect and quantitatively rank the importance of liquefaction model input parameters, we leverage a Random Forest machine learning (ML) approach using two methods: (1) a feature importance metric based on the Gini impurity index, and (2) a SHapley Additive exPlanations (SHAP)-based approach. Both approaches were applied to typical input factors used in legacy liquefaction triggering models based on cone penetration test (CPT) data. The analyses used all reviewed (i.e., fully vetted) data in the Next Generation Liquefaction (NGL) database. Our analysis then separately explores the impact of seven input parameters on the resulting models. We show that the most important input parameters are: (1) the peak ground acceleration, (2) the soil behavior type index, and (3) the earthquake magnitude (which serves as a proxy for duration in such models). The input parameters with the lowest importance are the total and effective vertical stresses. A limitation of this analysis is that the ML model used does not allow for extrapolation beyond the range of the data. As a result, for input parameters with narrow data distributions (i.e., a somewhat limited parameter space), a lower ranking could reflect the limited range of available values rather than actual low importance. This limitation likely accounts for the low importance attached to stress-related input parameters, since legacy case histories are generally from shallow (<10 m) depths.
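The Gini-based ranking described above can be sketched with scikit-learn. The example below runs on synthetic random data (the NGL case histories are not reproduced here); the feature names mirror typical CPT-based liquefaction inputs, and the synthetic label is deliberately driven by the first three columns so that they rank highest, loosely mimicking the paper's finding.

```python
# Hedged sketch of Gini-impurity-based feature importance from a Random
# Forest, on synthetic data. Feature names are illustrative stand-ins for
# typical liquefaction model inputs; the data carry no physical meaning.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["PGA", "Ic", "Mw", "qc1Ncs", "depth", "sigma_v", "sigma_v_eff"]
X = rng.normal(size=(500, len(features)))
# Synthetic binary label driven mostly by the first three columns:
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] + X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(features, model.feature_importances_),
                 key=lambda t: -t[1])
for name, importance in ranking:
    print(f"{name:12s} {importance:.3f}")
```

On real case histories, the SHAP-based ranking in the paper plays a complementary role, attributing each prediction to individual inputs rather than averaging impurity decreases over the forest.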

CEERS: 7.7 μm PAH Star Formation Rate Calibration with JWST MIRI

(2024)

We test the relationship between UV-derived star formation rates (SFRs) and the 7.7 μm polycyclic aromatic hydrocarbon (PAH) luminosities from the integrated emission of galaxies at z ∼ 0-2. We utilize multiband photometry covering 0.2-160 μm from the Hubble Space Telescope, CFHT, JWST, Spitzer, and Herschel for galaxies in the Cosmic Evolution Early Release Science (CEERS) Survey. We perform spectral energy distribution (SED) modeling of these data to measure dust-corrected far-UV (FUV) luminosities, LFUV, and UV-derived SFRs. We then fit SED models to the JWST/MIRI 7.7-21 μm CEERS data to derive rest-frame 7.7 μm luminosities, L770, using the average flux density in the rest-frame MIRI F770W bandpass. We observe a correlation between L770 and LFUV, where log L770 ∝ (1.27 ± 0.04) log LFUV. L770 diverges from this relation for galaxies at lower metallicities, lower dust obscuration, and for galaxies dominated by evolved stellar populations. We derive a "single-wavelength" SFR calibration for L770 that has a scatter from model-estimated SFRs (σΔSFR) of 0.24 dex. We derive a "multiwavelength" calibration for the linear combination of the observed FUV luminosity (uncorrected for dust) and the rest-frame 7.7 μm luminosity, which has a scatter of σΔSFR = 0.21 dex. The relatively small decrease in σ suggests this is near the systematic accuracy of the total SFRs using either calibration. These results demonstrate that the rest-frame 7.7 μm emission constrained by JWST/MIRI is a tracer of the SFR for distant galaxies to this accuracy, provided the galaxies are dominated by star formation with moderate-to-high levels of attenuation and metallicity.
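The log-linear relation and the scatter statistic above can be sketched numerically. The slope 1.27 comes from the abstract; the intercept and the sample luminosities below are placeholders, not the paper's fitted values.

```python
# Hedged sketch of a log-log luminosity relation and its scatter in dex.
# Only the slope (1.27) is taken from the abstract; everything else is
# illustrative.
import math

def log_l770_from_fuv(log_lfuv, slope=1.27, intercept=0.0):
    """Predict log10(L770) from log10(LFUV) under a linear log-log fit.
    The intercept is a placeholder, NOT the paper's fitted value."""
    return slope * log_lfuv + intercept

def scatter_dex(pred, obs):
    """RMS scatter (in dex) between predicted and observed log-luminosities,
    analogous to the sigma_dSFR statistic quoted above."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

obs_log_l770 = [10.2, 10.8, 11.1]                   # illustrative values
pred = [log_l770_from_fuv(x) for x in [8.0, 8.5, 8.7]]
print(round(scatter_dex(pred, obs_log_l770), 2))
```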


Recommendations on best available science for the United States National Seismic Hazard Model

(2024)

The 50-state update to the 2023 United States National Seismic Hazard Model (NSHM) is the latest in a sequence published by the U.S. Geological Survey (USGS). The 2023 NSHM is intended for use in building codes and similar applications at return periods of 475 years (corresponding to exceedance probabilities of 10% in 50 years) or longer. In reviewing the model, the NSHM Program Steering Committee, consisting of the authors of this paper, considered the characteristics of "best available science" that are applicable to the NSHM. Best available science must perform better than the previous NSHM, and there should be no available alternatives that could improve the models. The following are suggested characteristics of "best available science":

A) Clear objectives
B) Rigorous conceptual model
C) Timely, relevant, and inclusive
D) Verified and reproducible
E) Validated intermediate and final models
F) Replicable within uncertainties
G) Peer reviewed
H) Permanent documentation

This article focuses on the justification for, and intent of, the above criteria for best available science.


Relative contributions of different sources of epistemic uncertainty on seismic hazard in California

(2024)

We evaluate the relative impact of three sources of epistemic uncertainty on probabilistic seismic hazard analyses (PSHA) in California: source model uncertainty, ground motion model (GMM) uncertainty, and site parameter uncertainty. Seismic source model uncertainty is inherently contained in the source model framework applied by the USGS in the 2023 National Seismic Hazard Model (NSHM23); we have added tools to extract this uncertainty for California sites in the open-source seismic hazard software OpenSHA. GMM uncertainty is generally accounted for using alternative models in PSHA or a single backbone model with a defined uncertainty. Site parameter uncertainty refers to uncertainty in the shear wave velocity of the upper 30 m of the site profile (VS30) and potentially other independent site parameters. We demonstrate the impacts of these major sources of epistemic uncertainty at the sites of two UC campuses: Berkeley, which is located near the active Hayward fault, and Davis, which is located in the relatively quiescent Central Valley. We investigate potential correlations between the different sources of uncertainty and find that source uncertainty is practically independent of GMM and VS30 uncertainty at Berkeley but dependent on both at Davis. At both locations, GMM and site parameter uncertainty are correlated (i.e., inter-dependent). We represent epistemic uncertainty in ground motion with a period-dependent log-normal standard deviation term that is specific to a given site location, site condition, and exceedance frequency. We show that at Berkeley, the total epistemic uncertainty can be well approximated by the square root of the sum of squares (SRSS) of the source uncertainty (i.e., uncertainty in ground motions related solely to the source model) and the combined GMM and site parameter uncertainty. We find that the combined GMM and VS30 uncertainty is comparable to or greater than the source uncertainty at many oscillator periods at both sites. Combined uncertainties range from natural log standard deviations of about 0.2 at short periods to about 0.6 (Berkeley) and 0.3-0.7 (Davis) at long periods.
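The SRSS combination of independent uncertainty terms described above is a one-line computation; the sketch below uses illustrative sigma values, not the study's site-specific results.

```python
# Square-root-sum-of-squares (SRSS) combination of independent log-normal
# standard deviation terms. Input values are illustrative only.
import math

def srss(*sigmas):
    """Combine independent log-normal standard deviations by SRSS."""
    return math.sqrt(sum(s * s for s in sigmas))

sigma_source = 0.3      # source-model uncertainty (illustrative)
sigma_gmm_site = 0.4    # combined GMM + VS30 uncertainty (illustrative)
total = srss(sigma_source, sigma_gmm_site)
print(round(total, 2))  # 0.5
```

Note that SRSS is only appropriate when the combined terms are independent, which is why the Berkeley result above (source uncertainty practically independent of GMM and VS30 uncertainty) supports this approximation there but not necessarily at Davis.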